00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v22.11" build number 203 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3705 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.130 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.131 The recommended git tool is: git 00:00:00.132 using credential 00000000-0000-0000-0000-000000000002 00:00:00.134 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.184 Fetching changes from the remote Git repository 00:00:00.187 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.234 Using shallow fetch with depth 1 00:00:00.234 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.234 > git --version # timeout=10 00:00:00.278 > git --version # 'git version 2.39.2' 00:00:00.278 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.307 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.307 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.533 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.545 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.558 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.559 > git config core.sparsecheckout # timeout=10 00:00:07.570 > git read-tree -mu HEAD # timeout=10 00:00:07.589 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 
00:00:07.613 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.614 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.715 [Pipeline] Start of Pipeline 00:00:07.725 [Pipeline] library 00:00:07.726 Loading library shm_lib@master 00:00:07.727 Library shm_lib@master is cached. Copying from home. 00:00:07.744 [Pipeline] node 00:00:07.755 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.757 [Pipeline] { 00:00:07.764 [Pipeline] catchError 00:00:07.766 [Pipeline] { 00:00:07.774 [Pipeline] wrap 00:00:07.782 [Pipeline] { 00:00:07.792 [Pipeline] stage 00:00:07.795 [Pipeline] { (Prologue) 00:00:08.014 [Pipeline] sh 00:00:08.293 + logger -p user.info -t JENKINS-CI 00:00:08.307 [Pipeline] echo 00:00:08.308 Node: WFP8 00:00:08.314 [Pipeline] sh 00:00:08.607 [Pipeline] setCustomBuildProperty 00:00:08.617 [Pipeline] echo 00:00:08.619 Cleanup processes 00:00:08.625 [Pipeline] sh 00:00:08.904 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.904 942465 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.916 [Pipeline] sh 00:00:09.204 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.204 ++ grep -v 'sudo pgrep' 00:00:09.204 ++ awk '{print $1}' 00:00:09.204 + sudo kill -9 00:00:09.204 + true 00:00:09.217 [Pipeline] cleanWs 00:00:09.227 [WS-CLEANUP] Deleting project workspace... 00:00:09.227 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.233 [WS-CLEANUP] done 00:00:09.235 [Pipeline] setCustomBuildProperty 00:00:09.245 [Pipeline] sh 00:00:09.602 + sudo git config --global --replace-all safe.directory '*' 00:00:09.698 [Pipeline] httpRequest 00:00:10.366 [Pipeline] echo 00:00:10.368 Sorcerer 10.211.164.20 is alive 00:00:10.378 [Pipeline] retry 00:00:10.380 [Pipeline] { 00:00:10.394 [Pipeline] httpRequest 00:00:10.399 HttpMethod: GET 00:00:10.400 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.400 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.410 Response Code: HTTP/1.1 200 OK 00:00:10.411 Success: Status code 200 is in the accepted range: 200,404 00:00:10.411 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.953 [Pipeline] } 00:00:12.971 [Pipeline] // retry 00:00:12.979 [Pipeline] sh 00:00:13.262 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.278 [Pipeline] httpRequest 00:00:13.663 [Pipeline] echo 00:00:13.665 Sorcerer 10.211.164.20 is alive 00:00:13.674 [Pipeline] retry 00:00:13.676 [Pipeline] { 00:00:13.690 [Pipeline] httpRequest 00:00:13.694 HttpMethod: GET 00:00:13.694 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:13.695 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:13.705 Response Code: HTTP/1.1 200 OK 00:00:13.705 Success: Status code 200 is in the accepted range: 200,404 00:00:13.705 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:54.196 [Pipeline] } 00:01:54.214 [Pipeline] // retry 00:01:54.221 [Pipeline] sh 00:01:54.504 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:57.044 [Pipeline] sh 00:01:57.323 + git -C spdk log 
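The cleanup stage above kills stale test processes by matching the workspace path with `pgrep -af`, filtering out the pgrep invocation itself (whose own command line also contains the pattern), extracting PIDs with awk, and tolerating an empty PID list with `+ true`. A standalone sketch of that pattern (the workspace path is illustrative, not the job's literal one):

```shell
#!/usr/bin/env bash
# Kill any leftover processes whose command line mentions the workspace.
# The path below is illustrative; each job substitutes its own root.
workspace="/var/jenkins/workspace/example-autotest/spdk"

# pgrep -af prints "PID full-command-line"; grep -v drops the pgrep
# line itself; awk keeps only the first field (the PID).
pids=$(pgrep -af "$workspace" | grep -v 'pgrep' | awk '{print $1}')

# kill with no arguments (or dead PIDs) fails, so the trailing
# "|| true" keeps a `set -e` pipeline from aborting, as in the log.
kill -9 $pids 2>/dev/null || true
echo "cleanup done"
```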
--oneline -n5 00:01:57.323 b18e1bd62 version: v24.09.1-pre 00:01:57.323 19524ad45 version: v24.09 00:01:57.323 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:57.323 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:57.323 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:57.339 [Pipeline] withCredentials 00:01:57.349 > git --version # timeout=10 00:01:57.358 > git --version # 'git version 2.39.2' 00:01:57.372 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:57.374 [Pipeline] { 00:01:57.382 [Pipeline] retry 00:01:57.384 [Pipeline] { 00:01:57.398 [Pipeline] sh 00:01:57.679 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:57.689 [Pipeline] } 00:01:57.704 [Pipeline] // retry 00:01:57.707 [Pipeline] } 00:01:57.722 [Pipeline] // withCredentials 00:01:57.729 [Pipeline] httpRequest 00:01:58.060 [Pipeline] echo 00:01:58.062 Sorcerer 10.211.164.20 is alive 00:01:58.070 [Pipeline] retry 00:01:58.072 [Pipeline] { 00:01:58.086 [Pipeline] httpRequest 00:01:58.090 HttpMethod: GET 00:01:58.091 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:58.091 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:58.093 Response Code: HTTP/1.1 200 OK 00:01:58.093 Success: Status code 200 is in the accepted range: 200,404 00:01:58.093 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:01.793 [Pipeline] } 00:02:01.824 [Pipeline] // retry 00:02:01.829 [Pipeline] sh 00:02:02.103 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:03.490 [Pipeline] sh 00:02:03.771 + git -C dpdk log --oneline -n5 00:02:03.771 caf0f5d395 version: 22.11.4 00:02:03.771 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:03.771 dc9c799c7d vhost: fix missing spinlock unlock 00:02:03.771 4307659a90 net/mlx5: 
fix LACP redirection in Rx domain 00:02:03.771 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:03.780 [Pipeline] } 00:02:03.794 [Pipeline] // stage 00:02:03.803 [Pipeline] stage 00:02:03.805 [Pipeline] { (Prepare) 00:02:03.824 [Pipeline] writeFile 00:02:03.838 [Pipeline] sh 00:02:04.116 + logger -p user.info -t JENKINS-CI 00:02:04.129 [Pipeline] sh 00:02:04.409 + logger -p user.info -t JENKINS-CI 00:02:04.420 [Pipeline] sh 00:02:04.699 + cat autorun-spdk.conf 00:02:04.699 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:04.699 SPDK_TEST_NVMF=1 00:02:04.699 SPDK_TEST_NVME_CLI=1 00:02:04.699 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:04.699 SPDK_TEST_NVMF_NICS=e810 00:02:04.699 SPDK_TEST_VFIOUSER=1 00:02:04.699 SPDK_RUN_UBSAN=1 00:02:04.699 NET_TYPE=phy 00:02:04.699 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:04.699 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:04.707 RUN_NIGHTLY=1 00:02:04.713 [Pipeline] readFile 00:02:04.740 [Pipeline] withEnv 00:02:04.743 [Pipeline] { 00:02:04.756 [Pipeline] sh 00:02:05.038 + set -ex 00:02:05.038 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:05.038 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:05.038 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.038 ++ SPDK_TEST_NVMF=1 00:02:05.038 ++ SPDK_TEST_NVME_CLI=1 00:02:05.038 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.038 ++ SPDK_TEST_NVMF_NICS=e810 00:02:05.038 ++ SPDK_TEST_VFIOUSER=1 00:02:05.038 ++ SPDK_RUN_UBSAN=1 00:02:05.038 ++ NET_TYPE=phy 00:02:05.038 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:05.038 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:05.038 ++ RUN_NIGHTLY=1 00:02:05.038 + case $SPDK_TEST_NVMF_NICS in 00:02:05.038 + DRIVERS=ice 00:02:05.038 + [[ tcp == \r\d\m\a ]] 00:02:05.038 + [[ -n ice ]] 00:02:05.038 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:05.038 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:05.038 rmmod: ERROR: Module mlx5_ib 
is not currently loaded 00:02:05.038 rmmod: ERROR: Module irdma is not currently loaded 00:02:05.038 rmmod: ERROR: Module i40iw is not currently loaded 00:02:05.038 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:05.038 + true 00:02:05.038 + for D in $DRIVERS 00:02:05.038 + sudo modprobe ice 00:02:05.038 + exit 0 00:02:05.047 [Pipeline] } 00:02:05.064 [Pipeline] // withEnv 00:02:05.069 [Pipeline] } 00:02:05.083 [Pipeline] // stage 00:02:05.092 [Pipeline] catchError 00:02:05.093 [Pipeline] { 00:02:05.107 [Pipeline] timeout 00:02:05.107 Timeout set to expire in 1 hr 0 min 00:02:05.113 [Pipeline] { 00:02:05.128 [Pipeline] stage 00:02:05.130 [Pipeline] { (Tests) 00:02:05.144 [Pipeline] sh 00:02:05.426 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:05.426 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:05.426 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:05.426 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:05.426 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:05.426 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:05.426 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:05.426 + [[ ! 
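The NIC prep step above unloads every competing RDMA-capable driver before loading the one the test matrix selected (`DRIVERS=ice`, matching `SPDK_TEST_NVMF_NICS=e810`); the `rmmod: ERROR: Module ... is not currently loaded` messages are expected noise, neutralized by the `+ true` that follows. A minimal sketch of that tolerant module swap (module names copied from the log; actually loading drivers requires root on a real node, so this is illustrative):

```shell
#!/usr/bin/env bash
# Tolerant driver swap: unload competitors, then load the target.
# Errors for modules that are not loaded are harmless, hence "|| true".
drivers_to_remove="mlx4_ib mlx5_ib irdma i40iw iw_cxgb4"
target_drivers="ice"

sudo rmmod $drivers_to_remove 2>/dev/null || true  # ignore "not loaded"

for d in $target_drivers; do
    sudo modprobe "$d"   # load the driver the test actually needs
done
```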
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:05.426 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:05.426 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:05.426 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:05.426 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:05.426 + source /etc/os-release 00:02:05.426 ++ NAME='Fedora Linux' 00:02:05.426 ++ VERSION='39 (Cloud Edition)' 00:02:05.426 ++ ID=fedora 00:02:05.426 ++ VERSION_ID=39 00:02:05.426 ++ VERSION_CODENAME= 00:02:05.426 ++ PLATFORM_ID=platform:f39 00:02:05.426 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:05.426 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:05.426 ++ LOGO=fedora-logo-icon 00:02:05.426 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:05.426 ++ HOME_URL=https://fedoraproject.org/ 00:02:05.426 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:05.426 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:05.426 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:05.426 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:05.426 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:05.426 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:05.426 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:05.426 ++ SUPPORT_END=2024-11-12 00:02:05.426 ++ VARIANT='Cloud Edition' 00:02:05.426 ++ VARIANT_ID=cloud 00:02:05.426 + uname -a 00:02:05.426 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:05.426 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:07.957 Hugepages 00:02:07.957 node hugesize free / total 00:02:07.957 node0 1048576kB 0 / 0 00:02:07.957 node0 2048kB 0 / 0 00:02:07.957 node1 1048576kB 0 / 0 00:02:07.957 node1 2048kB 0 / 0 00:02:07.957 00:02:07.957 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:07.957 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:07.957 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:02:07.957 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:07.957 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:07.957 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:07.957 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:07.957 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:07.957 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:07.957 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:07.957 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:07.957 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:07.957 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:07.957 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:07.957 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:07.957 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:07.957 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:07.957 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:07.957 + rm -f /tmp/spdk-ld-path 00:02:07.957 + source autorun-spdk.conf 00:02:07.957 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.957 ++ SPDK_TEST_NVMF=1 00:02:07.957 ++ SPDK_TEST_NVME_CLI=1 00:02:07.957 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.957 ++ SPDK_TEST_NVMF_NICS=e810 00:02:07.957 ++ SPDK_TEST_VFIOUSER=1 00:02:07.957 ++ SPDK_RUN_UBSAN=1 00:02:07.957 ++ NET_TYPE=phy 00:02:07.957 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:07.957 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:07.957 ++ RUN_NIGHTLY=1 00:02:07.957 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:07.957 + [[ -n '' ]] 00:02:07.957 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.957 + for M in /var/spdk/build-*-manifest.txt 00:02:07.957 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:07.957 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:07.957 + for M in /var/spdk/build-*-manifest.txt 00:02:07.957 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:07.957 + cp /var/spdk/build-pkg-manifest.txt 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:07.957 + for M in /var/spdk/build-*-manifest.txt 00:02:07.957 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:07.957 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:07.957 ++ uname 00:02:07.957 + [[ Linux == \L\i\n\u\x ]] 00:02:07.957 + sudo dmesg -T 00:02:07.957 + sudo dmesg --clear 00:02:08.215 + dmesg_pid=943939 00:02:08.215 + [[ Fedora Linux == FreeBSD ]] 00:02:08.215 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:08.215 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:08.215 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:08.215 + [[ -x /usr/src/fio-static/fio ]] 00:02:08.215 + sudo dmesg -Tw 00:02:08.215 + export FIO_BIN=/usr/src/fio-static/fio 00:02:08.215 + FIO_BIN=/usr/src/fio-static/fio 00:02:08.215 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:08.215 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:08.215 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:08.215 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:08.215 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:08.215 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:08.215 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:08.215 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:08.215 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:08.215 Test configuration: 00:02:08.215 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:08.215 SPDK_TEST_NVMF=1 00:02:08.215 SPDK_TEST_NVME_CLI=1 00:02:08.215 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:08.215 SPDK_TEST_NVMF_NICS=e810 00:02:08.215 SPDK_TEST_VFIOUSER=1 00:02:08.215 SPDK_RUN_UBSAN=1 00:02:08.215 NET_TYPE=phy 00:02:08.215 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:08.215 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 
00:02:08.215 RUN_NIGHTLY=1 09:36:36 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:08.215 09:36:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:08.215 09:36:36 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:08.215 09:36:36 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:08.215 09:36:36 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:08.215 09:36:36 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:08.215 09:36:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.215 09:36:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.215 09:36:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.215 09:36:36 -- paths/export.sh@5 -- $ export PATH 00:02:08.215 09:36:36 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.215 09:36:36 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:08.215 09:36:36 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:08.215 09:36:36 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733560596.XXXXXX 00:02:08.215 09:36:36 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733560596.6gKQTO 00:02:08.215 09:36:36 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:08.215 09:36:36 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:02:08.215 09:36:36 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:08.215 09:36:36 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:08.215 09:36:36 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:08.215 09:36:36 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:08.215 09:36:36 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:08.215 09:36:36 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:08.215 09:36:36 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.216 09:36:36 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug 
--enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:08.216 09:36:36 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:08.216 09:36:36 -- pm/common@17 -- $ local monitor 00:02:08.216 09:36:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.216 09:36:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.216 09:36:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.216 09:36:36 -- pm/common@21 -- $ date +%s 00:02:08.216 09:36:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.216 09:36:36 -- pm/common@21 -- $ date +%s 00:02:08.216 09:36:36 -- pm/common@25 -- $ sleep 1 00:02:08.216 09:36:36 -- pm/common@21 -- $ date +%s 00:02:08.216 09:36:36 -- pm/common@21 -- $ date +%s 00:02:08.216 09:36:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733560596 00:02:08.216 09:36:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733560596 00:02:08.216 09:36:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733560596 00:02:08.216 09:36:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733560596 00:02:08.216 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733560596_collect-vmstat.pm.log 00:02:08.216 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733560596_collect-cpu-load.pm.log 00:02:08.216 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733560596_collect-cpu-temp.pm.log 00:02:08.216 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733560596_collect-bmc-pm.bmc.pm.log 00:02:09.153 09:36:37 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:09.153 09:36:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:09.153 09:36:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:09.153 09:36:37 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:09.153 09:36:37 -- spdk/autobuild.sh@16 -- $ date -u 00:02:09.153 Sat Dec 7 08:36:37 AM UTC 2024 00:02:09.153 09:36:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:09.153 v24.09-1-gb18e1bd62 00:02:09.153 09:36:37 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:09.153 09:36:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:09.153 09:36:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:09.153 09:36:37 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:09.153 09:36:37 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:09.153 09:36:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.413 ************************************ 00:02:09.413 START TEST ubsan 00:02:09.413 ************************************ 00:02:09.413 09:36:37 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:09.413 using ubsan 00:02:09.413 00:02:09.413 real 0m0.000s 00:02:09.413 user 0m0.000s 00:02:09.413 sys 0m0.000s 00:02:09.413 09:36:37 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:09.413 09:36:37 ubsan -- 
common/autotest_common.sh@10 -- $ set +x 00:02:09.413 ************************************ 00:02:09.413 END TEST ubsan 00:02:09.413 ************************************ 00:02:09.413 09:36:37 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:09.413 09:36:37 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:09.413 09:36:37 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:09.413 09:36:37 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:09.413 09:36:37 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:09.413 09:36:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.413 ************************************ 00:02:09.413 START TEST build_native_dpdk 00:02:09.413 ************************************ 00:02:09.413 09:36:37 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:09.413 09:36:37 build_native_dpdk -- 
common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:09.413 caf0f5d395 version: 22.11.4 00:02:09.413 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:09.413 dc9c799c7d vhost: fix missing spinlock unlock 00:02:09.413 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:09.413 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@90 -- $ 
dpdk_cflags+=' -Werror' 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:09.413 
09:36:37 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:09.413 09:36:37 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:09.413 09:36:37 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:09.413 patching file config/rte_config.h 00:02:09.413 Hunk #1 succeeded at 60 (offset 1 line). 
00:02:09.413 09:36:38 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:09.413 09:36:38 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:09.413 09:36:38 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:09.413 09:36:38 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:09.413 09:36:38 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:09.413 09:36:38 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:09.414 09:36:38 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:09.414 patching file lib/pcapng/rte_pcapng.c 00:02:09.414 Hunk #1 succeeded at 110 (offset -18 lines). 
00:02:09.414 09:36:38 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:09.414 09:36:38 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:09.414 09:36:38 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:09.414 09:36:38 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:09.414 09:36:38 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:09.414 09:36:38 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:09.414 09:36:38 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:14.686 The Meson build system 00:02:14.686 Version: 
1.5.0 00:02:14.686 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:14.686 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:14.686 Build type: native build 00:02:14.686 Program cat found: YES (/usr/bin/cat) 00:02:14.686 Project name: DPDK 00:02:14.686 Project version: 22.11.4 00:02:14.686 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:14.686 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:14.686 Host machine cpu family: x86_64 00:02:14.686 Host machine cpu: x86_64 00:02:14.686 Message: ## Building in Developer Mode ## 00:02:14.686 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:14.687 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:14.687 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:14.687 Program objdump found: YES (/usr/bin/objdump) 00:02:14.687 Program python3 found: YES (/usr/bin/python3) 00:02:14.687 Program cat found: YES (/usr/bin/cat) 00:02:14.687 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:14.687 Checking for size of "void *" : 8 00:02:14.687 Checking for size of "void *" : 8 (cached) 00:02:14.687 Library m found: YES 00:02:14.687 Library numa found: YES 00:02:14.687 Has header "numaif.h" : YES 00:02:14.687 Library fdt found: NO 00:02:14.687 Library execinfo found: NO 00:02:14.687 Has header "execinfo.h" : YES 00:02:14.687 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:14.687 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:14.687 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:14.687 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:14.687 Run-time dependency openssl found: YES 3.1.1 00:02:14.687 Run-time dependency libpcap found: YES 1.10.4 00:02:14.687 Has header "pcap.h" with dependency libpcap: YES 00:02:14.687 Compiler for C supports arguments -Wcast-qual: YES 00:02:14.687 Compiler for C supports arguments -Wdeprecated: YES 00:02:14.687 Compiler for C supports arguments -Wformat: YES 00:02:14.687 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:14.687 Compiler for C supports arguments -Wformat-security: NO 00:02:14.687 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:14.687 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:14.687 Compiler for C supports arguments -Wnested-externs: YES 00:02:14.687 Compiler for C supports arguments -Wold-style-definition: YES 00:02:14.687 Compiler for C supports arguments -Wpointer-arith: YES 00:02:14.687 Compiler for C supports arguments -Wsign-compare: YES 00:02:14.687 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:14.687 Compiler for C supports arguments -Wundef: YES 00:02:14.687 Compiler for C supports arguments -Wwrite-strings: YES 00:02:14.687 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:14.687 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:14.687 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:14.687 
Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:14.687 Compiler for C supports arguments -mavx512f: YES 00:02:14.687 Checking if "AVX512 checking" compiles: YES 00:02:14.687 Fetching value of define "__SSE4_2__" : 1 00:02:14.687 Fetching value of define "__AES__" : 1 00:02:14.687 Fetching value of define "__AVX__" : 1 00:02:14.687 Fetching value of define "__AVX2__" : 1 00:02:14.687 Fetching value of define "__AVX512BW__" : 1 00:02:14.687 Fetching value of define "__AVX512CD__" : 1 00:02:14.687 Fetching value of define "__AVX512DQ__" : 1 00:02:14.687 Fetching value of define "__AVX512F__" : 1 00:02:14.687 Fetching value of define "__AVX512VL__" : 1 00:02:14.687 Fetching value of define "__PCLMUL__" : 1 00:02:14.687 Fetching value of define "__RDRND__" : 1 00:02:14.687 Fetching value of define "__RDSEED__" : 1 00:02:14.687 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:14.687 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:14.687 Message: lib/kvargs: Defining dependency "kvargs" 00:02:14.687 Message: lib/telemetry: Defining dependency "telemetry" 00:02:14.687 Checking for function "getentropy" : YES 00:02:14.687 Message: lib/eal: Defining dependency "eal" 00:02:14.687 Message: lib/ring: Defining dependency "ring" 00:02:14.687 Message: lib/rcu: Defining dependency "rcu" 00:02:14.687 Message: lib/mempool: Defining dependency "mempool" 00:02:14.687 Message: lib/mbuf: Defining dependency "mbuf" 00:02:14.687 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:14.687 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:14.687 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:14.687 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:14.687 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:14.687 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:14.687 Compiler for C supports arguments -mpclmul: YES 00:02:14.687 Compiler for C supports arguments -maes: YES 
00:02:14.687 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:14.687 Compiler for C supports arguments -mavx512bw: YES 00:02:14.687 Compiler for C supports arguments -mavx512dq: YES 00:02:14.687 Compiler for C supports arguments -mavx512vl: YES 00:02:14.687 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:14.687 Compiler for C supports arguments -mavx2: YES 00:02:14.687 Compiler for C supports arguments -mavx: YES 00:02:14.687 Message: lib/net: Defining dependency "net" 00:02:14.687 Message: lib/meter: Defining dependency "meter" 00:02:14.687 Message: lib/ethdev: Defining dependency "ethdev" 00:02:14.687 Message: lib/pci: Defining dependency "pci" 00:02:14.687 Message: lib/cmdline: Defining dependency "cmdline" 00:02:14.687 Message: lib/metrics: Defining dependency "metrics" 00:02:14.687 Message: lib/hash: Defining dependency "hash" 00:02:14.687 Message: lib/timer: Defining dependency "timer" 00:02:14.687 Fetching value of define "__AVX2__" : 1 (cached) 00:02:14.687 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:14.687 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:14.687 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:14.687 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:14.687 Message: lib/acl: Defining dependency "acl" 00:02:14.687 Message: lib/bbdev: Defining dependency "bbdev" 00:02:14.687 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:14.687 Run-time dependency libelf found: YES 0.191 00:02:14.687 Message: lib/bpf: Defining dependency "bpf" 00:02:14.687 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:14.687 Message: lib/compressdev: Defining dependency "compressdev" 00:02:14.687 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:14.687 Message: lib/distributor: Defining dependency "distributor" 00:02:14.687 Message: lib/efd: Defining dependency "efd" 00:02:14.687 Message: lib/eventdev: Defining dependency "eventdev" 00:02:14.687 Message: lib/gpudev: 
Defining dependency "gpudev" 00:02:14.687 Message: lib/gro: Defining dependency "gro" 00:02:14.687 Message: lib/gso: Defining dependency "gso" 00:02:14.687 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:14.687 Message: lib/jobstats: Defining dependency "jobstats" 00:02:14.687 Message: lib/latencystats: Defining dependency "latencystats" 00:02:14.687 Message: lib/lpm: Defining dependency "lpm" 00:02:14.687 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:14.687 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:14.687 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:14.687 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:14.687 Message: lib/member: Defining dependency "member" 00:02:14.687 Message: lib/pcapng: Defining dependency "pcapng" 00:02:14.687 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:14.687 Message: lib/power: Defining dependency "power" 00:02:14.687 Message: lib/rawdev: Defining dependency "rawdev" 00:02:14.687 Message: lib/regexdev: Defining dependency "regexdev" 00:02:14.687 Message: lib/dmadev: Defining dependency "dmadev" 00:02:14.687 Message: lib/rib: Defining dependency "rib" 00:02:14.687 Message: lib/reorder: Defining dependency "reorder" 00:02:14.687 Message: lib/sched: Defining dependency "sched" 00:02:14.687 Message: lib/security: Defining dependency "security" 00:02:14.687 Message: lib/stack: Defining dependency "stack" 00:02:14.687 Has header "linux/userfaultfd.h" : YES 00:02:14.687 Message: lib/vhost: Defining dependency "vhost" 00:02:14.687 Message: lib/ipsec: Defining dependency "ipsec" 00:02:14.687 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:14.687 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:14.687 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:14.687 Message: lib/fib: Defining dependency "fib" 00:02:14.687 Message: lib/port: Defining dependency "port" 00:02:14.688 Message: lib/pdump: Defining dependency "pdump" 
00:02:14.688 Message: lib/table: Defining dependency "table" 00:02:14.688 Message: lib/pipeline: Defining dependency "pipeline" 00:02:14.688 Message: lib/graph: Defining dependency "graph" 00:02:14.688 Message: lib/node: Defining dependency "node" 00:02:14.688 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:14.688 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:14.688 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:14.688 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:14.688 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:14.688 Compiler for C supports arguments -Wno-unused-value: YES 00:02:14.688 Compiler for C supports arguments -Wno-format: YES 00:02:14.688 Compiler for C supports arguments -Wno-format-security: YES 00:02:14.688 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:14.946 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:14.946 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:14.946 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:14.946 Fetching value of define "__AVX2__" : 1 (cached) 00:02:14.946 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:14.946 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:14.946 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:14.946 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:14.946 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:14.946 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:14.946 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:14.946 Configuring doxy-api.conf using configuration 00:02:14.946 Program sphinx-build found: NO 00:02:14.946 Configuring rte_build_config.h using configuration 00:02:14.946 Message: 00:02:14.946 ================= 00:02:14.946 Applications Enabled 00:02:14.946 ================= 00:02:14.946 00:02:14.946 apps: 
00:02:14.946 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:14.946 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:14.946 test-security-perf, 00:02:14.946 00:02:14.946 Message: 00:02:14.946 ================= 00:02:14.946 Libraries Enabled 00:02:14.946 ================= 00:02:14.946 00:02:14.946 libs: 00:02:14.946 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:14.946 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:14.946 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:14.946 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:14.946 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:14.946 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:14.946 table, pipeline, graph, node, 00:02:14.946 00:02:14.946 Message: 00:02:14.946 =============== 00:02:14.946 Drivers Enabled 00:02:14.946 =============== 00:02:14.946 00:02:14.946 common: 00:02:14.946 00:02:14.946 bus: 00:02:14.947 pci, vdev, 00:02:14.947 mempool: 00:02:14.947 ring, 00:02:14.947 dma: 00:02:14.947 00:02:14.947 net: 00:02:14.947 i40e, 00:02:14.947 raw: 00:02:14.947 00:02:14.947 crypto: 00:02:14.947 00:02:14.947 compress: 00:02:14.947 00:02:14.947 regex: 00:02:14.947 00:02:14.947 vdpa: 00:02:14.947 00:02:14.947 event: 00:02:14.947 00:02:14.947 baseband: 00:02:14.947 00:02:14.947 gpu: 00:02:14.947 00:02:14.947 00:02:14.947 Message: 00:02:14.947 ================= 00:02:14.947 Content Skipped 00:02:14.947 ================= 00:02:14.947 00:02:14.947 apps: 00:02:14.947 00:02:14.947 libs: 00:02:14.947 kni: explicitly disabled via build config (deprecated lib) 00:02:14.947 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:14.947 00:02:14.947 drivers: 00:02:14.947 common/cpt: not in enabled drivers build config 00:02:14.947 common/dpaax: not in enabled drivers build 
config 00:02:14.947 common/iavf: not in enabled drivers build config 00:02:14.947 common/idpf: not in enabled drivers build config 00:02:14.947 common/mvep: not in enabled drivers build config 00:02:14.947 common/octeontx: not in enabled drivers build config 00:02:14.947 bus/auxiliary: not in enabled drivers build config 00:02:14.947 bus/dpaa: not in enabled drivers build config 00:02:14.947 bus/fslmc: not in enabled drivers build config 00:02:14.947 bus/ifpga: not in enabled drivers build config 00:02:14.947 bus/vmbus: not in enabled drivers build config 00:02:14.947 common/cnxk: not in enabled drivers build config 00:02:14.947 common/mlx5: not in enabled drivers build config 00:02:14.947 common/qat: not in enabled drivers build config 00:02:14.947 common/sfc_efx: not in enabled drivers build config 00:02:14.947 mempool/bucket: not in enabled drivers build config 00:02:14.947 mempool/cnxk: not in enabled drivers build config 00:02:14.947 mempool/dpaa: not in enabled drivers build config 00:02:14.947 mempool/dpaa2: not in enabled drivers build config 00:02:14.947 mempool/octeontx: not in enabled drivers build config 00:02:14.947 mempool/stack: not in enabled drivers build config 00:02:14.947 dma/cnxk: not in enabled drivers build config 00:02:14.947 dma/dpaa: not in enabled drivers build config 00:02:14.947 dma/dpaa2: not in enabled drivers build config 00:02:14.947 dma/hisilicon: not in enabled drivers build config 00:02:14.947 dma/idxd: not in enabled drivers build config 00:02:14.947 dma/ioat: not in enabled drivers build config 00:02:14.947 dma/skeleton: not in enabled drivers build config 00:02:14.947 net/af_packet: not in enabled drivers build config 00:02:14.947 net/af_xdp: not in enabled drivers build config 00:02:14.947 net/ark: not in enabled drivers build config 00:02:14.947 net/atlantic: not in enabled drivers build config 00:02:14.947 net/avp: not in enabled drivers build config 00:02:14.947 net/axgbe: not in enabled drivers build config 00:02:14.947 
net/bnx2x: not in enabled drivers build config 00:02:14.947 net/bnxt: not in enabled drivers build config 00:02:14.947 net/bonding: not in enabled drivers build config 00:02:14.947 net/cnxk: not in enabled drivers build config 00:02:14.947 net/cxgbe: not in enabled drivers build config 00:02:14.947 net/dpaa: not in enabled drivers build config 00:02:14.947 net/dpaa2: not in enabled drivers build config 00:02:14.947 net/e1000: not in enabled drivers build config 00:02:14.947 net/ena: not in enabled drivers build config 00:02:14.947 net/enetc: not in enabled drivers build config 00:02:14.947 net/enetfec: not in enabled drivers build config 00:02:14.947 net/enic: not in enabled drivers build config 00:02:14.947 net/failsafe: not in enabled drivers build config 00:02:14.947 net/fm10k: not in enabled drivers build config 00:02:14.947 net/gve: not in enabled drivers build config 00:02:14.947 net/hinic: not in enabled drivers build config 00:02:14.947 net/hns3: not in enabled drivers build config 00:02:14.947 net/iavf: not in enabled drivers build config 00:02:14.947 net/ice: not in enabled drivers build config 00:02:14.947 net/idpf: not in enabled drivers build config 00:02:14.947 net/igc: not in enabled drivers build config 00:02:14.947 net/ionic: not in enabled drivers build config 00:02:14.947 net/ipn3ke: not in enabled drivers build config 00:02:14.947 net/ixgbe: not in enabled drivers build config 00:02:14.947 net/kni: not in enabled drivers build config 00:02:14.947 net/liquidio: not in enabled drivers build config 00:02:14.947 net/mana: not in enabled drivers build config 00:02:14.947 net/memif: not in enabled drivers build config 00:02:14.947 net/mlx4: not in enabled drivers build config 00:02:14.947 net/mlx5: not in enabled drivers build config 00:02:14.947 net/mvneta: not in enabled drivers build config 00:02:14.947 net/mvpp2: not in enabled drivers build config 00:02:14.947 net/netvsc: not in enabled drivers build config 00:02:14.947 net/nfb: not in enabled 
drivers build config 00:02:14.947 net/nfp: not in enabled drivers build config 00:02:14.947 net/ngbe: not in enabled drivers build config 00:02:14.947 net/null: not in enabled drivers build config 00:02:14.947 net/octeontx: not in enabled drivers build config 00:02:14.947 net/octeon_ep: not in enabled drivers build config 00:02:14.947 net/pcap: not in enabled drivers build config 00:02:14.947 net/pfe: not in enabled drivers build config 00:02:14.947 net/qede: not in enabled drivers build config 00:02:14.947 net/ring: not in enabled drivers build config 00:02:14.947 net/sfc: not in enabled drivers build config 00:02:14.947 net/softnic: not in enabled drivers build config 00:02:14.947 net/tap: not in enabled drivers build config 00:02:14.947 net/thunderx: not in enabled drivers build config 00:02:14.947 net/txgbe: not in enabled drivers build config 00:02:14.947 net/vdev_netvsc: not in enabled drivers build config 00:02:14.947 net/vhost: not in enabled drivers build config 00:02:14.947 net/virtio: not in enabled drivers build config 00:02:14.947 net/vmxnet3: not in enabled drivers build config 00:02:14.947 raw/cnxk_bphy: not in enabled drivers build config 00:02:14.947 raw/cnxk_gpio: not in enabled drivers build config 00:02:14.947 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:14.947 raw/ifpga: not in enabled drivers build config 00:02:14.947 raw/ntb: not in enabled drivers build config 00:02:14.947 raw/skeleton: not in enabled drivers build config 00:02:14.947 crypto/armv8: not in enabled drivers build config 00:02:14.947 crypto/bcmfs: not in enabled drivers build config 00:02:14.947 crypto/caam_jr: not in enabled drivers build config 00:02:14.947 crypto/ccp: not in enabled drivers build config 00:02:14.947 crypto/cnxk: not in enabled drivers build config 00:02:14.947 crypto/dpaa_sec: not in enabled drivers build config 00:02:14.947 crypto/dpaa2_sec: not in enabled drivers build config 00:02:14.947 crypto/ipsec_mb: not in enabled drivers build config 
00:02:14.947 crypto/mlx5: not in enabled drivers build config 00:02:14.947 crypto/mvsam: not in enabled drivers build config 00:02:14.947 crypto/nitrox: not in enabled drivers build config 00:02:14.947 crypto/null: not in enabled drivers build config 00:02:14.947 crypto/octeontx: not in enabled drivers build config 00:02:14.947 crypto/openssl: not in enabled drivers build config 00:02:14.947 crypto/scheduler: not in enabled drivers build config 00:02:14.947 crypto/uadk: not in enabled drivers build config 00:02:14.947 crypto/virtio: not in enabled drivers build config 00:02:14.947 compress/isal: not in enabled drivers build config 00:02:14.948 compress/mlx5: not in enabled drivers build config 00:02:14.948 compress/octeontx: not in enabled drivers build config 00:02:14.948 compress/zlib: not in enabled drivers build config 00:02:14.948 regex/mlx5: not in enabled drivers build config 00:02:14.948 regex/cn9k: not in enabled drivers build config 00:02:14.948 vdpa/ifc: not in enabled drivers build config 00:02:14.948 vdpa/mlx5: not in enabled drivers build config 00:02:14.948 vdpa/sfc: not in enabled drivers build config 00:02:14.948 event/cnxk: not in enabled drivers build config 00:02:14.948 event/dlb2: not in enabled drivers build config 00:02:14.948 event/dpaa: not in enabled drivers build config 00:02:14.948 event/dpaa2: not in enabled drivers build config 00:02:14.948 event/dsw: not in enabled drivers build config 00:02:14.948 event/opdl: not in enabled drivers build config 00:02:14.948 event/skeleton: not in enabled drivers build config 00:02:14.948 event/sw: not in enabled drivers build config 00:02:14.948 event/octeontx: not in enabled drivers build config 00:02:14.948 baseband/acc: not in enabled drivers build config 00:02:14.948 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:14.948 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:14.948 baseband/la12xx: not in enabled drivers build config 00:02:14.948 baseband/null: not in 
enabled drivers build config 00:02:14.948 baseband/turbo_sw: not in enabled drivers build config 00:02:14.948 gpu/cuda: not in enabled drivers build config 00:02:14.948 00:02:14.948 00:02:14.948 Build targets in project: 311 00:02:14.948 00:02:14.948 DPDK 22.11.4 00:02:14.948 00:02:14.948 User defined options 00:02:14.948 libdir : lib 00:02:14.948 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:14.948 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:14.948 c_link_args : 00:02:14.948 enable_docs : false 00:02:14.948 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:14.948 enable_kmods : false 00:02:14.948 machine : native 00:02:14.948 tests : false 00:02:14.948 00:02:14.948 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:14.948 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:15.210 09:36:43 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 00:02:15.210 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:15.210 [1/740] Generating lib/rte_telemetry_def with a custom command 00:02:15.210 [2/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:15.210 [3/740] Generating lib/rte_kvargs_def with a custom command 00:02:15.210 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:15.210 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:15.210 [6/740] Generating lib/rte_ring_def with a custom command 00:02:15.210 [7/740] Generating lib/rte_eal_mingw with a custom command 00:02:15.210 [8/740] Generating lib/rte_rcu_mingw with a custom command 00:02:15.210 [9/740] Generating lib/rte_mbuf_def with a custom command 00:02:15.210 [10/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:15.474 [11/740] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:15.474 [12/740] Generating lib/rte_mempool_def with a custom command
00:02:15.474 [13/740] Generating lib/rte_mempool_mingw with a custom command
00:02:15.474 [14/740] Generating lib/rte_rcu_def with a custom command
00:02:15.474 [15/740] Generating lib/rte_eal_def with a custom command
00:02:15.474 [16/740] Generating lib/rte_ring_mingw with a custom command
00:02:15.474 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:15.474 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:15.474 [19/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:15.474 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:15.474 [21/740] Generating lib/rte_net_def with a custom command
00:02:15.474 [22/740] Generating lib/rte_meter_def with a custom command
00:02:15.474 [23/740] Generating lib/rte_meter_mingw with a custom command
00:02:15.474 [24/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:15.474 [25/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:15.474 [26/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:15.474 [27/740] Generating lib/rte_net_mingw with a custom command
00:02:15.474 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:15.474 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:15.474 [30/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o
00:02:15.474 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:15.474 [32/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:15.474 [33/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:15.474 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:15.474 [35/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:15.474 [36/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:15.474 [37/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:15.474 [38/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:15.474 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:15.474 [40/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:15.474 [41/740] Generating lib/rte_ethdev_mingw with a custom command
00:02:15.474 [42/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:15.474 [43/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:15.474 [44/740] Generating lib/rte_pci_mingw with a custom command
00:02:15.474 [45/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:15.474 [46/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:15.474 [47/740] Generating lib/rte_ethdev_def with a custom command
00:02:15.474 [48/740] Generating lib/rte_pci_def with a custom command
00:02:15.474 [49/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:15.474 [50/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:15.474 [51/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:15.474 [52/740] Linking static target lib/librte_kvargs.a
00:02:15.474 [53/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:15.474 [54/740] Generating lib/rte_cmdline_mingw with a custom command
00:02:15.474 [55/740] Generating lib/rte_cmdline_def with a custom command
00:02:15.474 [56/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:15.474 [57/740] Generating lib/rte_metrics_mingw with a custom command
00:02:15.474 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:15.474 [59/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:15.474 [60/740] Generating lib/rte_metrics_def with a custom command
00:02:15.474 [61/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:15.474 [62/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:15.474 [63/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:15.474 [64/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:15.474 [65/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:15.474 [66/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:15.474 [67/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:15.474 [68/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:15.474 [69/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:15.474 [70/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:15.474 [71/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:15.474 [72/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:15.474 [73/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:15.474 [74/740] Generating lib/rte_hash_def with a custom command
00:02:15.474 [75/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:15.474 [76/740] Generating lib/rte_timer_def with a custom command
00:02:15.474 [77/740] Generating lib/rte_hash_mingw with a custom command
00:02:15.738 [78/740] Linking static target lib/librte_meter.a
00:02:15.738 [79/740] Linking static target lib/librte_ring.a
00:02:15.738 [80/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:15.738 [81/740] Generating lib/rte_timer_mingw with a custom command
00:02:15.738 [82/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:15.738 [83/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:15.738 [84/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:15.738 [85/740] Linking static target lib/librte_pci.a
00:02:15.738 [86/740] Generating lib/rte_acl_def with a custom command
00:02:15.738 [87/740] Generating lib/rte_acl_mingw with a custom command
00:02:15.738 [88/740] Generating lib/rte_bbdev_def with a custom command
00:02:15.738 [89/740] Generating lib/rte_bbdev_mingw with a custom command
00:02:15.738 [90/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:15.738 [91/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:15.738 [92/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:15.738 [93/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:15.738 [94/740] Generating lib/rte_bitratestats_def with a custom command
00:02:15.738 [95/740] Generating lib/rte_bitratestats_mingw with a custom command
00:02:15.738 [96/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:15.738 [97/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:15.738 [98/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:15.738 [99/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:15.738 [100/740] Generating lib/rte_bpf_def with a custom command
00:02:15.738 [101/740] Generating lib/rte_cfgfile_def with a custom command
00:02:15.738 [102/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:15.738 [103/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o
00:02:15.738 [104/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:15.738 [105/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:15.738 [106/740] Generating lib/rte_bpf_mingw with a custom command
00:02:15.738 [107/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:15.738 [108/740] Generating lib/rte_cfgfile_mingw with a custom command
00:02:15.738 [109/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:15.738 [110/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:15.738 [111/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:15.738 [112/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:15.738 [113/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:15.738 [114/740] Generating lib/rte_compressdev_mingw with a custom command
00:02:15.738 [115/740] Generating lib/rte_compressdev_def with a custom command
00:02:15.738 [116/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:15.738 [117/740] Generating lib/rte_cryptodev_mingw with a custom command
00:02:15.738 [118/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:15.738 [119/740] Generating lib/rte_cryptodev_def with a custom command
00:02:15.738 [120/740] Generating lib/rte_distributor_mingw with a custom command
00:02:15.738 [121/740] Generating lib/rte_distributor_def with a custom command
00:02:15.738 [122/740] Generating lib/rte_efd_def with a custom command
00:02:15.738 [123/740] Generating lib/rte_efd_mingw with a custom command
00:02:15.738 [124/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:15.738 [125/740] Generating lib/rte_eventdev_def with a custom command
00:02:15.738 [126/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:15.738 [127/740] Generating lib/rte_eventdev_mingw with a custom command
00:02:15.738 [128/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.998 [129/740] Generating lib/rte_gpudev_def with a custom command
00:02:15.998 [130/740] Generating lib/rte_gpudev_mingw with a custom command
00:02:15.998 [131/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:15.998 [132/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:15.998 [133/740] Linking target lib/librte_kvargs.so.23.0
00:02:15.998 [134/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.998 [135/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.998 [136/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:15.998 [137/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:15.998 [138/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:15.998 [139/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:15.998 [140/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:15.998 [141/740] Generating lib/rte_gro_def with a custom command
00:02:15.998 [142/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:15.998 [143/740] Generating lib/rte_gro_mingw with a custom command
00:02:15.998 [144/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.998 [145/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:15.998 [146/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:15.998 [147/740] Generating lib/rte_gso_def with a custom command
00:02:15.998 [148/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:15.998 [149/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:15.998 [150/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:15.998 [151/740] Generating lib/rte_gso_mingw with a custom command
00:02:15.998 [152/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:15.998 [153/740] Linking static target lib/librte_cfgfile.a
00:02:15.998 [154/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:15.998 [155/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:15.998 [156/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:15.998 [157/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:15.998 [158/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:15.998 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:15.998 [160/740] Generating lib/rte_ip_frag_def with a custom command
00:02:15.998 [161/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:02:15.998 [162/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:15.998 [163/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:15.998 [164/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:15.998 [165/740] Linking static target lib/librte_cmdline.a
00:02:16.259 [166/740] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:16.259 [167/740] Generating lib/rte_ip_frag_mingw with a custom command
00:02:16.259 [168/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:16.259 [169/740] Generating lib/rte_jobstats_def with a custom command
00:02:16.259 [170/740] Generating lib/rte_jobstats_mingw with a custom command
00:02:16.259 [171/740] Generating lib/rte_latencystats_mingw with a custom command
00:02:16.259 [172/740] Generating lib/rte_latencystats_def with a custom command
00:02:16.259 [173/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:16.259 [174/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:16.259 [175/740] Generating lib/rte_lpm_def with a custom command
00:02:16.259 [176/740] Linking static target lib/librte_metrics.a
00:02:16.259 [177/740] Generating lib/rte_lpm_mingw with a custom command
00:02:16.259 [178/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:16.259 [179/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:16.259 [180/740] Generating lib/rte_member_def with a custom command
00:02:16.259 [181/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:16.259 [182/740] Generating lib/rte_member_mingw with a custom command
00:02:16.259 [183/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:16.259 [184/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:16.259 [185/740] Linking static target lib/librte_timer.a
00:02:16.259 [186/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:16.259 [187/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:16.259 [188/740] Linking static target lib/librte_telemetry.a
00:02:16.259 [189/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:16.259 [190/740] Generating lib/rte_pcapng_def with a custom command
00:02:16.259 [191/740] Generating lib/rte_pcapng_mingw with a custom command
00:02:16.259 [192/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:16.259 [193/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:16.259 [194/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:16.259 [195/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:16.259 [196/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:16.259 [197/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:16.259 [198/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:16.259 [199/740] Linking static target lib/librte_bitratestats.a
00:02:16.259 [200/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:16.259 [201/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:16.259 [202/740] Linking static target lib/librte_jobstats.a
00:02:16.259 [203/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:16.259 [204/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:16.259 [205/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:16.259 [206/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:16.259 [207/740] Generating lib/rte_power_mingw with a custom command
00:02:16.259 [208/740] Generating lib/rte_power_def with a custom command
00:02:16.259 [209/740] Linking static target lib/librte_net.a
00:02:16.259 [210/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:16.259 [211/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:16.259 [212/740] Generating lib/rte_rawdev_def with a custom command
00:02:16.259 [213/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:16.259 [214/740] Generating lib/rte_rawdev_mingw with a custom command
00:02:16.259 [215/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:16.259 [216/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:16.259 [217/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:16.259 [218/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:16.259 [219/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:16.259 [220/740] Generating lib/rte_regexdev_mingw with a custom command
00:02:16.259 [221/740] Generating lib/rte_regexdev_def with a custom command
00:02:16.259 [222/740] Generating lib/rte_dmadev_def with a custom command
00:02:16.259 [223/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:16.522 [224/740] Generating lib/rte_dmadev_mingw with a custom command
00:02:16.522 [225/740] Generating lib/rte_rib_def with a custom command
00:02:16.522 [226/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:16.522 [227/740] Generating lib/rte_rib_mingw with a custom command
00:02:16.522 [228/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:16.522 [229/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:16.522 [230/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:16.522 [231/740] Generating lib/rte_reorder_def with a custom command
00:02:16.522 [232/740] Generating lib/rte_reorder_mingw with a custom command
00:02:16.522 [233/740] Generating lib/rte_sched_mingw with a custom command
00:02:16.522 [234/740] Generating lib/rte_sched_def with a custom command
00:02:16.522 [235/740] Generating lib/rte_security_mingw with a custom command
00:02:16.522 [236/740] Generating lib/rte_security_def with a custom command
00:02:16.522 [237/740] Generating lib/rte_stack_def with a custom command
00:02:16.522 [238/740] Generating lib/rte_stack_mingw with a custom command
00:02:16.522 [239/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:16.522 [240/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:16.522 [241/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:16.522 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:16.522 [243/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:16.522 [244/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:16.522 [245/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:16.522 [246/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o
00:02:16.522 [247/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:16.522 [248/740] Generating lib/rte_vhost_def with a custom command
00:02:16.522 [249/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:16.522 [250/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.522 [251/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:16.522 [252/740] Generating lib/rte_vhost_mingw with a custom command
00:02:16.522 [253/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:16.522 [254/740] Linking static target lib/librte_compressdev.a
00:02:16.522 [255/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:16.522 [256/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:16.522 [257/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:16.523 [258/740] Linking static target lib/librte_stack.a
00:02:16.523 [259/740] Linking static target lib/librte_rcu.a
00:02:16.523 [260/740] Generating lib/rte_ipsec_def with a custom command
00:02:16.523 [261/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:16.523 [262/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:16.523 [263/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.523 [264/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:16.523 [265/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:16.523 [266/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:16.523 [267/740] Generating lib/rte_ipsec_mingw with a custom command
00:02:16.523 [268/740] Linking static target lib/librte_mempool.a
00:02:16.523 [269/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:16.523 [270/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:16.792 [271/740] Generating lib/rte_fib_def with a custom command
00:02:16.792 [272/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:16.792 [273/740] Generating lib/rte_fib_mingw with a custom command
00:02:16.792 [274/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:16.792 [275/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:16.792 [276/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.792 [277/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.792 [278/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:16.792 [279/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.792 [280/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:16.792 [281/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.792 [282/740] Linking static target lib/librte_bbdev.a
00:02:16.792 [283/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.792 [284/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:16.792 [285/740] Generating lib/rte_port_mingw with a custom command
00:02:16.792 [286/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:16.792 [287/740] Generating lib/rte_port_def with a custom command
00:02:16.792 [288/740] Linking target lib/librte_telemetry.so.23.0
00:02:16.792 [289/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:16.792 [290/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:16.792 [291/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:16.792 [292/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:16.792 [293/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:16.792 [294/740] Linking static target lib/librte_rawdev.a
00:02:16.792 [295/740] Generating lib/rte_pdump_def with a custom command
00:02:16.792 [296/740] Generating lib/rte_pdump_mingw with a custom command
00:02:16.792 [297/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:16.792 [298/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.792 [299/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:16.792 [300/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:16.792 [301/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:16.792 [302/740] Linking static target lib/librte_gpudev.a
00:02:16.792 [303/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:16.792 [304/740] Linking static target lib/librte_dmadev.a
00:02:16.792 [305/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:16.792 [306/740] Linking static target lib/librte_gro.a
00:02:17.052 [307/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:17.052 [308/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:17.052 [309/740] Linking static target lib/librte_distributor.a
00:02:17.052 [310/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:17.052 [311/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:17.052 [312/740] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:17.052 [313/740] Linking static target lib/librte_latencystats.a
00:02:17.052 [314/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o
00:02:17.052 [315/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:02:17.052 [316/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:17.052 [317/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:17.052 [318/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:17.052 [319/740] Linking static target lib/librte_gso.a
00:02:17.052 [320/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.052 [321/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:17.052 [322/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:17.052 [323/740] Generating lib/rte_table_def with a custom command
00:02:17.052 [324/740] Generating lib/rte_table_mingw with a custom command
00:02:17.052 [325/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:17.052 [326/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:17.052 [327/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:17.052 [328/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:17.052 [329/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:17.052 [330/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:17.052 [331/740] Linking static target lib/librte_eal.a
00:02:17.052 [332/740] Generating lib/rte_pipeline_def with a custom command
00:02:17.052 [333/740] Linking static target lib/librte_regexdev.a
00:02:17.310 [334/740] Generating lib/rte_pipeline_mingw with a custom command
00:02:17.310 [335/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:17.310 [336/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:17.310 [337/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:17.310 [338/740] Linking static target lib/librte_ip_frag.a
00:02:17.310 [339/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:17.310 [340/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:17.310 [341/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.310 [342/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:17.310 [343/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:17.310 [344/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:17.310 [345/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:17.310 [346/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:17.310 [347/740] Linking static target lib/librte_power.a
00:02:17.310 [348/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.310 [349/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:17.310 [350/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:17.310 [351/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:17.310 [352/740] Linking static target lib/librte_mbuf.a
00:02:17.310 [353/740] Generating lib/rte_graph_def with a custom command
00:02:17.310 [354/740] Linking static target lib/librte_reorder.a
00:02:17.310 [355/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.310 [356/740] Generating lib/rte_graph_mingw with a custom command
00:02:17.310 [357/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:17.310 [358/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.310 [359/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:17.310 [360/740] Linking static target lib/librte_pcapng.a
00:02:17.310 [361/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:17.310 [362/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:17.310 [363/740] Generating lib/rte_node_def with a custom command
00:02:17.310 [364/740] Linking static target lib/librte_bpf.a
00:02:17.310 [365/740] Linking static target lib/librte_security.a
00:02:17.310 [366/740] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:17.310 [367/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:17.573 [368/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:02:17.573 [369/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:02:17.573 [370/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.573 [371/740] Generating lib/rte_node_mingw with a custom command
00:02:17.573 [372/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.573 [373/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:17.573 [374/740] Generating drivers/rte_bus_pci_def with a custom command
00:02:17.573 [375/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:17.573 [376/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:17.573 [377/740] Generating drivers/rte_bus_pci_mingw with a custom command
00:02:17.573 [378/740] Generating drivers/rte_bus_vdev_def with a custom command
00:02:17.573 [379/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:17.573 [380/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:17.573 [381/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:17.573 [382/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:17.573 [383/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.573 [384/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:17.573 [385/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:17.573 [386/740] Generating drivers/rte_bus_vdev_mingw with a custom command
00:02:17.573 [387/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:17.573 [388/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:17.573 [389/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:17.573 [390/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.573 [391/740] Generating drivers/rte_mempool_ring_mingw with a custom command
00:02:17.573 [392/740] Generating drivers/rte_mempool_ring_def with a custom command
00:02:17.573 [393/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:17.573 [394/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.573 [395/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:17.573 [396/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.833 [397/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:17.834 [398/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:17.834 [399/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:17.834 [400/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:17.834 [401/740] Linking static target lib/librte_rib.a
00:02:17.834 [402/740] Linking static target lib/librte_lpm.a
00:02:17.834 [403/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:17.834 [404/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:17.834 [405/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:17.834 [406/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.834 [407/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:17.834 [408/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.834 [409/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:17.834 [410/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:17.834 [411/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:17.834 [412/740] Generating drivers/rte_net_i40e_mingw with a custom command
00:02:17.834 [413/740] Generating drivers/rte_net_i40e_def with a custom command
00:02:17.834 [414/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:17.834 [415/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:17.834 [416/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:17.834 [417/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:17.834 [418/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:17.834 [419/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.834 [420/740] Linking static target lib/librte_efd.a
00:02:17.834 [421/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:17.834 [422/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:17.834 [423/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:17.834 [424/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:17.834 [425/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:17.834 [426/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:17.834 [427/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:02:17.834 [428/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:02:17.834 [429/740] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:18.097 [430/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:18.097 [431/740] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:18.097 [432/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:18.097 [433/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:18.097 [434/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.097 [435/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:18.097 [436/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:18.097 [437/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:18.097 [438/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:18.097 [439/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:18.097 [440/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:18.097 [441/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.097 [442/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:18.097 [443/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:18.097 [444/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:02:18.097 [445/740] Linking static target lib/librte_graph.a
00:02:18.097 [446/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.097 [447/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:02:18.097 [448/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:18.097 [449/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.357 [450/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.357 [451/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:18.357 [452/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:18.357 [453/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:18.357 [454/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.357 [455/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:18.357 [456/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:18.357 [457/740] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:18.357 [458/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:18.357 [459/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:18.357 [460/740] Linking static target lib/librte_fib.a
00:02:18.357 [461/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:18.357 [462/740] Linking static target drivers/librte_bus_vdev.a
00:02:18.357 [463/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:18.357 [464/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.357 [465/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:18.357 [466/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:18.357 [467/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:02:18.357 [468/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.619 [469/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:18.619 [470/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:18.619 [471/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:18.619 [472/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:18.619 [473/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:18.619 [474/740] Linking static target lib/librte_pdump.a
00:02:18.619 [475/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:02:18.619 [476/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:02:18.619 [477/740] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:18.619
[478/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:18.619 [479/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:18.619 [480/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.619 [481/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:18.619 [482/740] Linking static target drivers/librte_bus_pci.a 00:02:18.619 [483/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:18.619 [484/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:18.619 [485/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:18.619 [486/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.887 [487/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:18.887 [488/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:18.887 [489/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:18.887 [490/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:18.887 [491/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.887 [492/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:18.887 [493/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:18.887 [494/740] Linking static target lib/librte_table.a 00:02:18.887 [495/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:18.887 [496/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:18.887 [497/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:18.887 [498/740] Compiling C 
object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:18.887 [499/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:19.148 [500/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:19.148 [501/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:19.148 [502/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:19.148 [503/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.148 [504/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:19.148 [505/740] Linking static target lib/librte_cryptodev.a 00:02:19.148 [506/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:19.148 [507/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:19.148 [508/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:19.148 [509/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:19.148 [510/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:19.148 [511/740] Linking static target lib/librte_sched.a 00:02:19.148 [512/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:19.148 [513/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:19.148 [514/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:19.148 [515/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:19.148 [516/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:19.148 [517/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:19.148 [518/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:19.148 [519/740] Compiling C object 
lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:19.148 [520/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:19.148 [521/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:19.148 [522/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:19.148 [523/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:19.407 [524/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.407 [525/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:19.407 [526/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:19.407 [527/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:19.407 [528/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:19.407 [529/740] Linking static target lib/librte_node.a 00:02:19.407 [530/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:19.407 [531/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:19.407 [532/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:19.407 [533/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:19.407 [534/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:19.407 [535/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:19.407 [536/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:19.407 [537/740] Linking static target lib/librte_ethdev.a 00:02:19.407 [538/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:19.407 [539/740] Linking static target lib/librte_ipsec.a 00:02:19.407 [540/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:19.407 [541/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:19.407 [542/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:19.407 [543/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:19.407 [544/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:19.407 [545/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:19.407 [546/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:19.407 [547/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:19.665 [548/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:19.665 [549/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:19.665 [550/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:19.665 [551/740] Linking static target lib/librte_member.a 00:02:19.665 [552/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:19.665 [553/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:19.665 [554/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:19.665 [555/740] Linking static target drivers/librte_mempool_ring.a 00:02:19.665 [556/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.665 [557/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:19.665 [558/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:19.665 [559/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.665 [560/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:19.665 
[561/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:19.665 [562/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.665 [563/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:19.665 [564/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:19.665 [565/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:19.665 [566/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:19.665 [567/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:19.665 [568/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:19.665 [569/740] Linking static target lib/librte_eventdev.a 00:02:19.665 [570/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:19.665 [571/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.665 [572/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:19.666 [573/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:19.666 [574/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:19.666 [575/740] Linking static target lib/librte_port.a 00:02:19.924 [576/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:19.924 [577/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:19.924 [578/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:19.924 [579/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:19.924 [580/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:19.924 [581/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:19.924 [582/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:19.924 
[583/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:19.924 [584/740] Linking static target lib/librte_hash.a 00:02:19.924 [585/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:19.924 [586/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:19.924 [587/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:19.924 [588/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:19.924 [589/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:19.924 [590/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:19.924 [591/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.924 [592/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:19.924 [593/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:19.924 [594/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:19.924 [595/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:19.924 [596/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:19.924 [597/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:20.184 [598/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:20.184 [599/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:20.184 [600/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:20.184 [601/740] Linking static target lib/librte_acl.a 00:02:20.184 [602/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:20.184 [603/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:20.184 [604/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:20.443 [605/740] Linking 
static target drivers/net/i40e/base/libi40e_base.a 00:02:20.443 [606/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:20.443 [607/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:20.443 [608/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:20.443 [609/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:20.443 [610/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:20.443 [611/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:20.443 [612/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.443 [613/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.701 [614/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:20.960 [615/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.960 [616/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:21.219 [617/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:21.219 [618/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:21.785 [619/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:21.785 [620/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:22.044 [621/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.044 [622/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:22.302 [623/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.561 [624/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:22.561 [625/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:22.820 [626/740] Generating 
drivers/rte_net_i40e.pmd.c with a custom command 00:02:22.820 [627/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:22.820 [628/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:22.820 [629/740] Linking static target drivers/librte_net_i40e.a 00:02:23.388 [630/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:23.388 [631/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:23.647 [632/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:23.906 [633/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.448 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.357 [635/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.357 [636/740] Linking target lib/librte_eal.so.23.0 00:02:28.357 [637/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:28.615 [638/740] Linking target lib/librte_meter.so.23.0 00:02:28.615 [639/740] Linking target lib/librte_timer.so.23.0 00:02:28.615 [640/740] Linking target lib/librte_cfgfile.so.23.0 00:02:28.615 [641/740] Linking target lib/librte_ring.so.23.0 00:02:28.615 [642/740] Linking target lib/librte_stack.so.23.0 00:02:28.615 [643/740] Linking target lib/librte_pci.so.23.0 00:02:28.615 [644/740] Linking target lib/librte_jobstats.so.23.0 00:02:28.615 [645/740] Linking target lib/librte_rawdev.so.23.0 00:02:28.615 [646/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:28.615 [647/740] Linking target lib/librte_dmadev.so.23.0 00:02:28.615 [648/740] Linking target lib/librte_graph.so.23.0 00:02:28.615 [649/740] Linking target lib/librte_acl.so.23.0 00:02:28.615 [650/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 
00:02:28.615 [651/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:28.615 [652/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:28.615 [653/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:28.615 [654/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:28.615 [655/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:28.615 [656/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:28.615 [657/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:28.615 [658/740] Linking target lib/librte_rcu.so.23.0 00:02:28.615 [659/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:28.615 [660/740] Linking target lib/librte_mempool.so.23.0 00:02:28.873 [661/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:28.873 [662/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:28.873 [663/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:28.873 [664/740] Linking target lib/librte_rib.so.23.0 00:02:28.873 [665/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:28.873 [666/740] Linking target lib/librte_mbuf.so.23.0 00:02:29.131 [667/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:29.131 [668/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:29.131 [669/740] Linking target lib/librte_fib.so.23.0 00:02:29.131 [670/740] Linking target lib/librte_gpudev.so.23.0 00:02:29.131 [671/740] Linking target lib/librte_regexdev.so.23.0 00:02:29.131 [672/740] Linking target lib/librte_bbdev.so.23.0 00:02:29.131 [673/740] Linking target lib/librte_compressdev.so.23.0 00:02:29.131 [674/740] Linking target 
lib/librte_net.so.23.0 00:02:29.131 [675/740] Linking target lib/librte_distributor.so.23.0 00:02:29.131 [676/740] Linking target lib/librte_reorder.so.23.0 00:02:29.131 [677/740] Linking target lib/librte_cryptodev.so.23.0 00:02:29.131 [678/740] Linking target lib/librte_sched.so.23.0 00:02:29.131 [679/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:29.131 [680/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:29.131 [681/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:29.131 [682/740] Linking target lib/librte_hash.so.23.0 00:02:29.131 [683/740] Linking target lib/librte_cmdline.so.23.0 00:02:29.131 [684/740] Linking target lib/librte_security.so.23.0 00:02:29.131 [685/740] Linking target lib/librte_ethdev.so.23.0 00:02:29.389 [686/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:29.389 [687/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:29.389 [688/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:29.389 [689/740] Linking target lib/librte_member.so.23.0 00:02:29.389 [690/740] Linking target lib/librte_efd.so.23.0 00:02:29.389 [691/740] Linking target lib/librte_lpm.so.23.0 00:02:29.389 [692/740] Linking target lib/librte_ipsec.so.23.0 00:02:29.389 [693/740] Linking target lib/librte_metrics.so.23.0 00:02:29.389 [694/740] Linking target lib/librte_ip_frag.so.23.0 00:02:29.389 [695/740] Linking target lib/librte_bpf.so.23.0 00:02:29.389 [696/740] Linking target lib/librte_power.so.23.0 00:02:29.389 [697/740] Linking target lib/librte_pcapng.so.23.0 00:02:29.389 [698/740] Linking target lib/librte_eventdev.so.23.0 00:02:29.389 [699/740] Linking target lib/librte_gro.so.23.0 00:02:29.389 [700/740] Linking target lib/librte_gso.so.23.0 00:02:29.389 [701/740] Linking target drivers/librte_net_i40e.so.23.0 
00:02:29.647 [702/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:29.647 [703/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:29.647 [704/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:29.647 [705/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:29.647 [706/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:29.647 [707/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:29.647 [708/740] Linking target lib/librte_node.so.23.0 00:02:29.647 [709/740] Linking target lib/librte_pdump.so.23.0 00:02:29.647 [710/740] Linking target lib/librte_bitratestats.so.23.0 00:02:29.647 [711/740] Linking target lib/librte_latencystats.so.23.0 00:02:29.647 [712/740] Linking target lib/librte_port.so.23.0 00:02:29.647 [713/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:29.647 [714/740] Linking static target lib/librte_vhost.a 00:02:29.647 [715/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:29.905 [716/740] Linking target lib/librte_table.so.23.0 00:02:29.905 [717/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:30.840 [718/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:30.840 [719/740] Linking static target lib/librte_pipeline.a 00:02:31.099 [720/740] Linking target app/dpdk-test-cmdline 00:02:31.099 [721/740] Linking target app/dpdk-test-compress-perf 00:02:31.099 [722/740] Linking target app/dpdk-test-fib 00:02:31.099 [723/740] Linking target app/dpdk-proc-info 00:02:31.099 [724/740] Linking target app/dpdk-pdump 00:02:31.099 [725/740] Linking target app/dpdk-test-regex 00:02:31.099 [726/740] Linking target app/dpdk-test-acl 00:02:31.099 [727/740] Linking target app/dpdk-dumpcap 00:02:31.099 
[728/740] Linking target app/dpdk-test-gpudev 00:02:31.099 [729/740] Linking target app/dpdk-test-eventdev 00:02:31.099 [730/740] Linking target app/dpdk-test-sad 00:02:31.099 [731/740] Linking target app/dpdk-test-security-perf 00:02:31.099 [732/740] Linking target app/dpdk-test-pipeline 00:02:31.099 [733/740] Linking target app/dpdk-test-bbdev 00:02:31.099 [734/740] Linking target app/dpdk-test-flow-perf 00:02:31.099 [735/740] Linking target app/dpdk-test-crypto-perf 00:02:31.099 [736/740] Linking target app/dpdk-testpmd 00:02:31.358 [737/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.616 [738/740] Linking target lib/librte_vhost.so.23.0 00:02:34.914 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.175 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:35.175 09:37:03 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:02:35.175 09:37:03 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:35.175 09:37:03 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:02:35.175 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:35.175 [0/1] Installing files. 
00:02:35.439 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:35.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:35.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:35.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:35.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:35.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:35.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:35.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:35.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:35.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.440 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.441 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:35.441 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 
00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.441 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:35.441 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 
00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.442 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:35.442 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:35.443 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.444 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.445 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.446 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:02:35.446 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:35.447 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:35.447 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:35.447 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:35.447 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.447 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.448 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.448 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.448 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.448 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.448 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.448 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.448 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.448 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.448 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.448 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.448 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.448 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.448 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.448 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.448 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.448 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.712 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.712 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.712 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.712 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.712 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:02:35.712 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.712 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:02:35.712 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.712 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:02:35.712 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.712 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:02:35.712 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:35.712 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:35.712 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:35.712 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:35.712 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:35.712 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:35.712 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:35.712 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:35.712 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:35.712 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:35.712 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:35.712 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:35.712 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:35.712 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:35.712 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:35.712 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:35.712 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.712 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.713 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.714 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.715 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.716 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.717 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.717 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.717 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.717 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.717 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.717 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.717 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.717 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.717 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.717 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.717 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.717 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.717 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.717 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.717 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.717 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:35.717 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:35.717 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:35.717 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:35.717 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:35.717 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:35.717 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:35.717 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:35.717 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:35.717 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:35.717 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:35.717 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:35.717 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:35.717 Installing symlink pointing to 
librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:35.717 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:35.717 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:35.717 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:35.717 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:35.717 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:35.717 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:35.717 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:35.717 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:35.717 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:35.717 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:35.717 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:35.717 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:35.717 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:35.717 
Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:35.717 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:35.717 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:35.717 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:35.717 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:35.717 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:35.717 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:35.717 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:35.717 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:35.717 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:35.717 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:35.717 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:35.717 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:35.717 Installing symlink pointing to librte_cfgfile.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:35.717 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:35.717 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:35.717 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:35.717 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:35.718 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:35.718 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:35.718 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:35.718 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:35.718 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:35.718 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:35.718 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:35.718 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:35.718 Installing symlink pointing to librte_gpudev.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:35.718 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:35.718 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:35.718 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:35.718 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:35.718 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:35.718 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:35.718 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:35.718 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:35.718 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:35.718 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:35.718 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:35.718 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:35.718 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:35.718 
Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:35.718 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:35.718 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:35.718 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:35.718 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:35.718 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:35.718 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:35.718 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:35.718 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:35.718 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:35.718 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:35.718 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:35.718 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:35.718 Installing symlink pointing to librte_reorder.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:35.718 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:35.718 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:35.718 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:35.718 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:35.718 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:35.718 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:35.718 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:35.718 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:35.718 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:35.718 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:35.718 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:35.718 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:35.718 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:35.718 Installing symlink 
pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:35.718 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:35.718 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:35.718 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:35.718 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:35.718 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:35.718 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:35.718 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:35.718 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:35.718 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:35.718 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:35.718 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:35.718 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:35.719 Installing symlink pointing to librte_bus_pci.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:35.719 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:35.719 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:35.719 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:35.719 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:35.719 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:35.719 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:35.719 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:35.719 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:35.719 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:35.719 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:35.719 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:35.719 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:35.719 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:35.719 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:35.719 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:35.719 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:35.719 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:35.719 Installing symlink pointing to librte_net_i40e.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:35.719 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:35.719 09:37:04 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:35.719 09:37:04 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:35.719 00:02:35.719 real 0m26.453s 00:02:35.719 user 6m56.223s 00:02:35.719 sys 1m44.481s 00:02:35.719 09:37:04 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:35.719 09:37:04 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:35.719 ************************************ 00:02:35.719 END TEST build_native_dpdk 00:02:35.719 ************************************ 00:02:35.979 09:37:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:35.979 09:37:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:35.979 09:37:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:35.979 09:37:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:35.979 09:37:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:35.979 09:37:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:35.979 09:37:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:35.979 09:37:04 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:35.979 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
00:02:36.238 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:36.238 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.238 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:36.501 Using 'verbs' RDMA provider 00:02:49.295 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:01.518 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:01.518 Creating mk/config.mk...done. 00:03:01.518 Creating mk/cc.flags.mk...done. 00:03:01.518 Type 'make' to build. 00:03:01.518 09:37:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:03:01.518 09:37:29 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:01.518 09:37:29 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:01.518 09:37:29 -- common/autotest_common.sh@10 -- $ set +x 00:03:01.518 ************************************ 00:03:01.518 START TEST make 00:03:01.518 ************************************ 00:03:01.518 09:37:29 make -- common/autotest_common.sh@1125 -- $ make -j96 00:03:01.518 make[1]: Nothing to be done for 'all'. 
00:03:02.476 The Meson build system 00:03:02.476 Version: 1.5.0 00:03:02.476 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:02.476 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:02.476 Build type: native build 00:03:02.476 Project name: libvfio-user 00:03:02.476 Project version: 0.0.1 00:03:02.476 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:02.476 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:02.476 Host machine cpu family: x86_64 00:03:02.476 Host machine cpu: x86_64 00:03:02.476 Run-time dependency threads found: YES 00:03:02.476 Library dl found: YES 00:03:02.476 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:02.476 Run-time dependency json-c found: YES 0.17 00:03:02.476 Run-time dependency cmocka found: YES 1.1.7 00:03:02.476 Program pytest-3 found: NO 00:03:02.476 Program flake8 found: NO 00:03:02.476 Program misspell-fixer found: NO 00:03:02.476 Program restructuredtext-lint found: NO 00:03:02.476 Program valgrind found: YES (/usr/bin/valgrind) 00:03:02.476 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:02.476 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:02.476 Compiler for C supports arguments -Wwrite-strings: YES 00:03:02.476 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:02.476 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:02.476 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:02.476 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:02.476 Build targets in project: 8 00:03:02.476 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:02.476 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:02.476 00:03:02.476 libvfio-user 0.0.1 00:03:02.476 00:03:02.476 User defined options 00:03:02.476 buildtype : debug 00:03:02.476 default_library: shared 00:03:02.476 libdir : /usr/local/lib 00:03:02.476 00:03:02.476 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:03.113 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:03.113 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:03.113 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:03.113 [3/37] Compiling C object samples/null.p/null.c.o 00:03:03.113 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:03.113 [5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:03.113 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:03.113 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:03.113 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:03.113 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:03.113 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:03.113 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:03.113 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:03.113 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:03.113 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:03.113 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:03.113 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:03.113 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:03.113 [18/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:03.113 [19/37] Compiling C object samples/server.p/server.c.o 00:03:03.113 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:03.113 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:03.113 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:03.113 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:03.113 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:03.113 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:03.113 [26/37] Compiling C object samples/client.p/client.c.o 00:03:03.113 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:03.113 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:03.113 [29/37] Linking target samples/client 00:03:03.113 [30/37] Linking target test/unit_tests 00:03:03.113 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:03:03.396 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:03.396 [33/37] Linking target samples/shadow_ioeventfd_server 00:03:03.396 [34/37] Linking target samples/server 00:03:03.396 [35/37] Linking target samples/null 00:03:03.396 [36/37] Linking target samples/lspci 00:03:03.396 [37/37] Linking target samples/gpio-pci-idio-16 00:03:03.396 INFO: autodetecting backend as ninja 00:03:03.397 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:03.397 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:03.998 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:03.998 ninja: no work to do. 
00:03:30.549 CC lib/ut/ut.o 00:03:30.549 CC lib/log/log.o 00:03:30.549 CC lib/log/log_flags.o 00:03:30.549 CC lib/log/log_deprecated.o 00:03:30.549 CC lib/ut_mock/mock.o 00:03:30.549 LIB libspdk_ut.a 00:03:30.549 LIB libspdk_log.a 00:03:30.549 LIB libspdk_ut_mock.a 00:03:30.549 SO libspdk_log.so.7.0 00:03:30.549 SO libspdk_ut.so.2.0 00:03:30.549 SO libspdk_ut_mock.so.6.0 00:03:30.549 SYMLINK libspdk_ut.so 00:03:30.549 SYMLINK libspdk_log.so 00:03:30.549 SYMLINK libspdk_ut_mock.so 00:03:30.808 CXX lib/trace_parser/trace.o 00:03:30.808 CC lib/ioat/ioat.o 00:03:30.808 CC lib/dma/dma.o 00:03:30.808 CC lib/util/base64.o 00:03:30.808 CC lib/util/bit_array.o 00:03:30.808 CC lib/util/cpuset.o 00:03:30.808 CC lib/util/crc32c.o 00:03:30.808 CC lib/util/crc16.o 00:03:30.808 CC lib/util/crc32.o 00:03:30.808 CC lib/util/dif.o 00:03:30.808 CC lib/util/crc32_ieee.o 00:03:30.808 CC lib/util/crc64.o 00:03:30.808 CC lib/util/fd.o 00:03:30.808 CC lib/util/fd_group.o 00:03:30.808 CC lib/util/file.o 00:03:30.808 CC lib/util/hexlify.o 00:03:30.808 CC lib/util/math.o 00:03:30.808 CC lib/util/iov.o 00:03:30.808 CC lib/util/strerror_tls.o 00:03:30.808 CC lib/util/net.o 00:03:30.808 CC lib/util/pipe.o 00:03:30.808 CC lib/util/string.o 00:03:30.808 CC lib/util/uuid.o 00:03:30.808 CC lib/util/xor.o 00:03:30.808 CC lib/util/zipf.o 00:03:30.808 CC lib/util/md5.o 00:03:31.067 CC lib/vfio_user/host/vfio_user.o 00:03:31.067 CC lib/vfio_user/host/vfio_user_pci.o 00:03:31.067 LIB libspdk_dma.a 00:03:31.067 SO libspdk_dma.so.5.0 00:03:31.067 LIB libspdk_ioat.a 00:03:31.067 SO libspdk_ioat.so.7.0 00:03:31.067 SYMLINK libspdk_dma.so 00:03:31.067 SYMLINK libspdk_ioat.so 00:03:31.067 LIB libspdk_vfio_user.a 00:03:31.067 SO libspdk_vfio_user.so.5.0 00:03:31.328 LIB libspdk_util.a 00:03:31.328 SYMLINK libspdk_vfio_user.so 00:03:31.328 SO libspdk_util.so.10.0 00:03:31.328 SYMLINK libspdk_util.so 00:03:31.328 LIB libspdk_trace_parser.a 00:03:31.587 SO libspdk_trace_parser.so.6.0 00:03:31.587 SYMLINK 
libspdk_trace_parser.so 00:03:31.587 CC lib/conf/conf.o 00:03:31.587 CC lib/rdma_utils/rdma_utils.o 00:03:31.587 CC lib/idxd/idxd_user.o 00:03:31.587 CC lib/idxd/idxd.o 00:03:31.587 CC lib/idxd/idxd_kernel.o 00:03:31.587 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:31.587 CC lib/rdma_provider/common.o 00:03:31.587 CC lib/vmd/vmd.o 00:03:31.587 CC lib/vmd/led.o 00:03:31.587 CC lib/env_dpdk/env.o 00:03:31.587 CC lib/env_dpdk/pci.o 00:03:31.587 CC lib/env_dpdk/memory.o 00:03:31.587 CC lib/env_dpdk/init.o 00:03:31.587 CC lib/env_dpdk/threads.o 00:03:31.587 CC lib/json/json_util.o 00:03:31.587 CC lib/env_dpdk/pci_ioat.o 00:03:31.587 CC lib/json/json_parse.o 00:03:31.587 CC lib/env_dpdk/pci_virtio.o 00:03:31.587 CC lib/json/json_write.o 00:03:31.587 CC lib/env_dpdk/pci_vmd.o 00:03:31.587 CC lib/env_dpdk/pci_idxd.o 00:03:31.587 CC lib/env_dpdk/pci_event.o 00:03:31.587 CC lib/env_dpdk/sigbus_handler.o 00:03:31.587 CC lib/env_dpdk/pci_dpdk.o 00:03:31.587 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:31.587 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:31.846 LIB libspdk_rdma_provider.a 00:03:31.846 LIB libspdk_conf.a 00:03:31.846 SO libspdk_rdma_provider.so.6.0 00:03:31.846 SO libspdk_conf.so.6.0 00:03:31.846 LIB libspdk_rdma_utils.a 00:03:31.846 SYMLINK libspdk_rdma_provider.so 00:03:31.846 SO libspdk_rdma_utils.so.1.0 00:03:32.105 SYMLINK libspdk_conf.so 00:03:32.105 LIB libspdk_json.a 00:03:32.105 SYMLINK libspdk_rdma_utils.so 00:03:32.105 SO libspdk_json.so.6.0 00:03:32.105 SYMLINK libspdk_json.so 00:03:32.105 LIB libspdk_idxd.a 00:03:32.105 SO libspdk_idxd.so.12.1 00:03:32.105 LIB libspdk_vmd.a 00:03:32.364 SO libspdk_vmd.so.6.0 00:03:32.364 SYMLINK libspdk_idxd.so 00:03:32.364 SYMLINK libspdk_vmd.so 00:03:32.364 CC lib/jsonrpc/jsonrpc_server.o 00:03:32.364 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:32.364 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:32.364 CC lib/jsonrpc/jsonrpc_client.o 00:03:32.623 LIB libspdk_jsonrpc.a 00:03:32.623 SO libspdk_jsonrpc.so.6.0 00:03:32.623 SYMLINK 
libspdk_jsonrpc.so 00:03:32.623 LIB libspdk_env_dpdk.a 00:03:32.881 SO libspdk_env_dpdk.so.15.0 00:03:32.881 SYMLINK libspdk_env_dpdk.so 00:03:32.881 CC lib/rpc/rpc.o 00:03:33.138 LIB libspdk_rpc.a 00:03:33.138 SO libspdk_rpc.so.6.0 00:03:33.138 SYMLINK libspdk_rpc.so 00:03:33.396 CC lib/notify/notify.o 00:03:33.396 CC lib/notify/notify_rpc.o 00:03:33.396 CC lib/trace/trace.o 00:03:33.396 CC lib/trace/trace_rpc.o 00:03:33.396 CC lib/trace/trace_flags.o 00:03:33.654 CC lib/keyring/keyring.o 00:03:33.654 CC lib/keyring/keyring_rpc.o 00:03:33.654 LIB libspdk_notify.a 00:03:33.654 SO libspdk_notify.so.6.0 00:03:33.654 LIB libspdk_trace.a 00:03:33.654 LIB libspdk_keyring.a 00:03:33.654 SYMLINK libspdk_notify.so 00:03:33.654 SO libspdk_keyring.so.2.0 00:03:33.654 SO libspdk_trace.so.11.0 00:03:33.913 SYMLINK libspdk_keyring.so 00:03:33.913 SYMLINK libspdk_trace.so 00:03:34.172 CC lib/thread/thread.o 00:03:34.172 CC lib/thread/iobuf.o 00:03:34.172 CC lib/sock/sock.o 00:03:34.172 CC lib/sock/sock_rpc.o 00:03:34.432 LIB libspdk_sock.a 00:03:34.432 SO libspdk_sock.so.10.0 00:03:34.432 SYMLINK libspdk_sock.so 00:03:34.691 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:34.691 CC lib/nvme/nvme_ctrlr.o 00:03:34.691 CC lib/nvme/nvme_fabric.o 00:03:34.691 CC lib/nvme/nvme_ns_cmd.o 00:03:34.691 CC lib/nvme/nvme_ns.o 00:03:34.691 CC lib/nvme/nvme_pcie_common.o 00:03:34.691 CC lib/nvme/nvme_pcie.o 00:03:34.691 CC lib/nvme/nvme_qpair.o 00:03:34.691 CC lib/nvme/nvme_quirks.o 00:03:34.691 CC lib/nvme/nvme.o 00:03:34.691 CC lib/nvme/nvme_transport.o 00:03:34.691 CC lib/nvme/nvme_discovery.o 00:03:34.691 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:34.691 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:34.691 CC lib/nvme/nvme_tcp.o 00:03:34.691 CC lib/nvme/nvme_opal.o 00:03:34.691 CC lib/nvme/nvme_io_msg.o 00:03:34.691 CC lib/nvme/nvme_poll_group.o 00:03:34.691 CC lib/nvme/nvme_zns.o 00:03:34.691 CC lib/nvme/nvme_stubs.o 00:03:34.691 CC lib/nvme/nvme_auth.o 00:03:34.691 CC lib/nvme/nvme_cuse.o 00:03:34.691 CC 
lib/nvme/nvme_vfio_user.o 00:03:34.691 CC lib/nvme/nvme_rdma.o 00:03:35.280 LIB libspdk_thread.a 00:03:35.280 SO libspdk_thread.so.10.1 00:03:35.280 SYMLINK libspdk_thread.so 00:03:35.540 CC lib/accel/accel.o 00:03:35.540 CC lib/accel/accel_rpc.o 00:03:35.540 CC lib/accel/accel_sw.o 00:03:35.540 CC lib/vfu_tgt/tgt_endpoint.o 00:03:35.540 CC lib/vfu_tgt/tgt_rpc.o 00:03:35.540 CC lib/init/subsystem.o 00:03:35.540 CC lib/init/json_config.o 00:03:35.540 CC lib/init/subsystem_rpc.o 00:03:35.540 CC lib/init/rpc.o 00:03:35.540 CC lib/blob/blobstore.o 00:03:35.540 CC lib/virtio/virtio.o 00:03:35.540 CC lib/virtio/virtio_vfio_user.o 00:03:35.540 CC lib/virtio/virtio_pci.o 00:03:35.540 CC lib/virtio/virtio_vhost_user.o 00:03:35.540 CC lib/blob/request.o 00:03:35.540 CC lib/blob/zeroes.o 00:03:35.540 CC lib/blob/blob_bs_dev.o 00:03:35.540 CC lib/fsdev/fsdev.o 00:03:35.540 CC lib/fsdev/fsdev_io.o 00:03:35.540 CC lib/fsdev/fsdev_rpc.o 00:03:35.799 LIB libspdk_init.a 00:03:35.799 SO libspdk_init.so.6.0 00:03:35.799 LIB libspdk_vfu_tgt.a 00:03:35.799 SO libspdk_vfu_tgt.so.3.0 00:03:35.799 LIB libspdk_virtio.a 00:03:35.799 SYMLINK libspdk_init.so 00:03:35.799 SO libspdk_virtio.so.7.0 00:03:35.799 SYMLINK libspdk_vfu_tgt.so 00:03:35.799 SYMLINK libspdk_virtio.so 00:03:36.058 LIB libspdk_fsdev.a 00:03:36.058 CC lib/event/reactor.o 00:03:36.058 CC lib/event/app.o 00:03:36.058 CC lib/event/log_rpc.o 00:03:36.058 CC lib/event/app_rpc.o 00:03:36.058 SO libspdk_fsdev.so.1.0 00:03:36.058 CC lib/event/scheduler_static.o 00:03:36.058 SYMLINK libspdk_fsdev.so 00:03:36.318 LIB libspdk_accel.a 00:03:36.318 SO libspdk_accel.so.16.0 00:03:36.318 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:36.318 LIB libspdk_event.a 00:03:36.318 LIB libspdk_nvme.a 00:03:36.318 SYMLINK libspdk_accel.so 00:03:36.581 SO libspdk_event.so.14.0 00:03:36.581 SYMLINK libspdk_event.so 00:03:36.581 SO libspdk_nvme.so.14.0 00:03:36.581 CC lib/bdev/bdev.o 00:03:36.581 CC lib/bdev/bdev_rpc.o 00:03:36.581 CC 
lib/bdev/bdev_zone.o 00:03:36.581 CC lib/bdev/part.o 00:03:36.581 CC lib/bdev/scsi_nvme.o 00:03:36.845 SYMLINK libspdk_nvme.so 00:03:36.845 LIB libspdk_fuse_dispatcher.a 00:03:36.845 SO libspdk_fuse_dispatcher.so.1.0 00:03:37.103 SYMLINK libspdk_fuse_dispatcher.so 00:03:37.670 LIB libspdk_blob.a 00:03:37.670 SO libspdk_blob.so.11.0 00:03:37.670 SYMLINK libspdk_blob.so 00:03:37.929 CC lib/lvol/lvol.o 00:03:38.188 CC lib/blobfs/blobfs.o 00:03:38.188 CC lib/blobfs/tree.o 00:03:38.446 LIB libspdk_bdev.a 00:03:38.705 SO libspdk_bdev.so.16.0 00:03:38.705 LIB libspdk_blobfs.a 00:03:38.705 SYMLINK libspdk_bdev.so 00:03:38.705 SO libspdk_blobfs.so.10.0 00:03:38.705 LIB libspdk_lvol.a 00:03:38.705 SO libspdk_lvol.so.10.0 00:03:38.705 SYMLINK libspdk_blobfs.so 00:03:38.705 SYMLINK libspdk_lvol.so 00:03:38.964 CC lib/nvmf/ctrlr.o 00:03:38.964 CC lib/ublk/ublk_rpc.o 00:03:38.964 CC lib/ublk/ublk.o 00:03:38.964 CC lib/nvmf/ctrlr_discovery.o 00:03:38.964 CC lib/scsi/dev.o 00:03:38.964 CC lib/nvmf/ctrlr_bdev.o 00:03:38.964 CC lib/scsi/lun.o 00:03:38.964 CC lib/ftl/ftl_core.o 00:03:38.964 CC lib/nvmf/subsystem.o 00:03:38.964 CC lib/scsi/port.o 00:03:38.964 CC lib/ftl/ftl_init.o 00:03:38.964 CC lib/nvmf/nvmf.o 00:03:38.964 CC lib/scsi/scsi.o 00:03:38.964 CC lib/ftl/ftl_layout.o 00:03:38.964 CC lib/nvmf/nvmf_rpc.o 00:03:38.964 CC lib/scsi/scsi_bdev.o 00:03:38.964 CC lib/ftl/ftl_debug.o 00:03:38.964 CC lib/nvmf/transport.o 00:03:38.964 CC lib/scsi/scsi_pr.o 00:03:38.964 CC lib/ftl/ftl_io.o 00:03:38.964 CC lib/nvmf/tcp.o 00:03:38.964 CC lib/nvmf/stubs.o 00:03:38.964 CC lib/ftl/ftl_sb.o 00:03:38.964 CC lib/scsi/scsi_rpc.o 00:03:38.964 CC lib/nvmf/vfio_user.o 00:03:38.964 CC lib/ftl/ftl_l2p.o 00:03:38.964 CC lib/scsi/task.o 00:03:38.964 CC lib/nbd/nbd.o 00:03:38.964 CC lib/nvmf/mdns_server.o 00:03:38.964 CC lib/ftl/ftl_l2p_flat.o 00:03:38.964 CC lib/nbd/nbd_rpc.o 00:03:38.964 CC lib/nvmf/rdma.o 00:03:38.964 CC lib/ftl/ftl_nv_cache.o 00:03:38.964 CC lib/ftl/ftl_band_ops.o 00:03:38.964 CC 
lib/ftl/ftl_band.o 00:03:38.964 CC lib/nvmf/auth.o 00:03:38.964 CC lib/ftl/ftl_writer.o 00:03:38.964 CC lib/ftl/ftl_reloc.o 00:03:38.964 CC lib/ftl/ftl_rq.o 00:03:38.964 CC lib/ftl/ftl_l2p_cache.o 00:03:38.964 CC lib/ftl/ftl_p2l.o 00:03:38.964 CC lib/ftl/ftl_p2l_log.o 00:03:38.964 CC lib/ftl/mngt/ftl_mngt.o 00:03:38.965 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:38.965 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:38.965 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:38.965 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:38.965 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:38.965 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:38.965 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:38.965 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:38.965 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:38.965 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:38.965 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:38.965 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:38.965 CC lib/ftl/utils/ftl_conf.o 00:03:38.965 CC lib/ftl/utils/ftl_mempool.o 00:03:38.965 CC lib/ftl/utils/ftl_md.o 00:03:38.965 CC lib/ftl/utils/ftl_property.o 00:03:38.965 CC lib/ftl/utils/ftl_bitmap.o 00:03:38.965 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:38.965 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:38.965 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:38.965 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:38.965 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:38.965 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:38.965 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:38.965 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:38.965 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:38.965 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:38.965 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:38.965 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:38.965 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:38.965 CC lib/ftl/base/ftl_base_dev.o 00:03:38.965 CC lib/ftl/ftl_trace.o 00:03:38.965 CC lib/ftl/base/ftl_base_bdev.o 00:03:39.533 LIB libspdk_nbd.a 00:03:39.533 SO libspdk_nbd.so.7.0 00:03:39.533 SYMLINK libspdk_nbd.so 00:03:39.792 LIB libspdk_scsi.a 00:03:39.792 LIB 
libspdk_ublk.a 00:03:39.792 SO libspdk_scsi.so.9.0 00:03:39.792 SO libspdk_ublk.so.3.0 00:03:39.792 SYMLINK libspdk_scsi.so 00:03:39.792 SYMLINK libspdk_ublk.so 00:03:39.792 LIB libspdk_ftl.a 00:03:40.051 SO libspdk_ftl.so.9.0 00:03:40.051 CC lib/iscsi/init_grp.o 00:03:40.051 CC lib/vhost/vhost_rpc.o 00:03:40.051 CC lib/iscsi/conn.o 00:03:40.051 CC lib/vhost/vhost.o 00:03:40.051 CC lib/iscsi/portal_grp.o 00:03:40.051 CC lib/vhost/vhost_scsi.o 00:03:40.051 CC lib/iscsi/iscsi.o 00:03:40.051 CC lib/vhost/vhost_blk.o 00:03:40.051 CC lib/iscsi/param.o 00:03:40.051 CC lib/vhost/rte_vhost_user.o 00:03:40.051 CC lib/iscsi/tgt_node.o 00:03:40.051 CC lib/iscsi/iscsi_subsystem.o 00:03:40.051 CC lib/iscsi/iscsi_rpc.o 00:03:40.051 CC lib/iscsi/task.o 00:03:40.310 SYMLINK libspdk_ftl.so 00:03:40.879 LIB libspdk_nvmf.a 00:03:40.880 SO libspdk_nvmf.so.19.0 00:03:40.880 LIB libspdk_vhost.a 00:03:40.880 SO libspdk_vhost.so.8.0 00:03:40.880 SYMLINK libspdk_nvmf.so 00:03:41.140 SYMLINK libspdk_vhost.so 00:03:41.140 LIB libspdk_iscsi.a 00:03:41.140 SO libspdk_iscsi.so.8.0 00:03:41.140 SYMLINK libspdk_iscsi.so 00:03:41.711 CC module/vfu_device/vfu_virtio.o 00:03:41.711 CC module/vfu_device/vfu_virtio_scsi.o 00:03:41.711 CC module/env_dpdk/env_dpdk_rpc.o 00:03:41.711 CC module/vfu_device/vfu_virtio_blk.o 00:03:41.711 CC module/vfu_device/vfu_virtio_fs.o 00:03:41.711 CC module/vfu_device/vfu_virtio_rpc.o 00:03:41.970 CC module/accel/error/accel_error.o 00:03:41.970 CC module/keyring/linux/keyring.o 00:03:41.970 CC module/accel/error/accel_error_rpc.o 00:03:41.970 CC module/keyring/linux/keyring_rpc.o 00:03:41.970 LIB libspdk_env_dpdk_rpc.a 00:03:41.970 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:41.970 CC module/keyring/file/keyring.o 00:03:41.970 CC module/fsdev/aio/fsdev_aio.o 00:03:41.970 CC module/keyring/file/keyring_rpc.o 00:03:41.970 CC module/sock/posix/posix.o 00:03:41.970 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:41.970 CC module/blob/bdev/blob_bdev.o 00:03:41.970 CC 
module/fsdev/aio/linux_aio_mgr.o 00:03:41.970 CC module/accel/ioat/accel_ioat.o 00:03:41.970 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:41.970 CC module/accel/ioat/accel_ioat_rpc.o 00:03:41.970 CC module/scheduler/gscheduler/gscheduler.o 00:03:41.970 CC module/accel/iaa/accel_iaa.o 00:03:41.970 CC module/accel/iaa/accel_iaa_rpc.o 00:03:41.970 CC module/accel/dsa/accel_dsa.o 00:03:41.970 CC module/accel/dsa/accel_dsa_rpc.o 00:03:41.970 SO libspdk_env_dpdk_rpc.so.6.0 00:03:41.970 SYMLINK libspdk_env_dpdk_rpc.so 00:03:41.970 LIB libspdk_keyring_linux.a 00:03:41.970 LIB libspdk_keyring_file.a 00:03:41.970 LIB libspdk_scheduler_gscheduler.a 00:03:41.970 LIB libspdk_scheduler_dpdk_governor.a 00:03:41.970 SO libspdk_keyring_linux.so.1.0 00:03:41.970 LIB libspdk_accel_error.a 00:03:41.970 SO libspdk_scheduler_gscheduler.so.4.0 00:03:41.970 LIB libspdk_accel_ioat.a 00:03:41.970 SO libspdk_keyring_file.so.2.0 00:03:41.970 LIB libspdk_scheduler_dynamic.a 00:03:41.970 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:42.233 SO libspdk_accel_error.so.2.0 00:03:42.233 SO libspdk_accel_ioat.so.6.0 00:03:42.233 LIB libspdk_accel_iaa.a 00:03:42.233 SYMLINK libspdk_scheduler_gscheduler.so 00:03:42.233 SO libspdk_scheduler_dynamic.so.4.0 00:03:42.233 SYMLINK libspdk_keyring_linux.so 00:03:42.233 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:42.233 SO libspdk_accel_iaa.so.3.0 00:03:42.233 SYMLINK libspdk_keyring_file.so 00:03:42.233 LIB libspdk_blob_bdev.a 00:03:42.233 SYMLINK libspdk_scheduler_dynamic.so 00:03:42.233 SO libspdk_blob_bdev.so.11.0 00:03:42.233 SYMLINK libspdk_accel_error.so 00:03:42.233 SYMLINK libspdk_accel_ioat.so 00:03:42.233 LIB libspdk_accel_dsa.a 00:03:42.233 SYMLINK libspdk_accel_iaa.so 00:03:42.233 SO libspdk_accel_dsa.so.5.0 00:03:42.233 SYMLINK libspdk_blob_bdev.so 00:03:42.233 LIB libspdk_vfu_device.a 00:03:42.233 SYMLINK libspdk_accel_dsa.so 00:03:42.233 SO libspdk_vfu_device.so.3.0 00:03:42.233 SYMLINK libspdk_vfu_device.so 00:03:42.492 LIB 
libspdk_fsdev_aio.a 00:03:42.492 SO libspdk_fsdev_aio.so.1.0 00:03:42.492 LIB libspdk_sock_posix.a 00:03:42.492 SO libspdk_sock_posix.so.6.0 00:03:42.492 SYMLINK libspdk_fsdev_aio.so 00:03:42.492 SYMLINK libspdk_sock_posix.so 00:03:42.750 CC module/bdev/raid/bdev_raid.o 00:03:42.750 CC module/bdev/raid/bdev_raid_rpc.o 00:03:42.750 CC module/bdev/passthru/vbdev_passthru.o 00:03:42.750 CC module/bdev/raid/raid0.o 00:03:42.750 CC module/bdev/raid/bdev_raid_sb.o 00:03:42.750 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:42.750 CC module/blobfs/bdev/blobfs_bdev.o 00:03:42.750 CC module/bdev/raid/concat.o 00:03:42.750 CC module/bdev/raid/raid1.o 00:03:42.750 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:42.750 CC module/bdev/aio/bdev_aio.o 00:03:42.750 CC module/bdev/error/vbdev_error.o 00:03:42.750 CC module/bdev/nvme/bdev_nvme.o 00:03:42.750 CC module/bdev/error/vbdev_error_rpc.o 00:03:42.750 CC module/bdev/aio/bdev_aio_rpc.o 00:03:42.750 CC module/bdev/nvme/nvme_rpc.o 00:03:42.750 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:42.750 CC module/bdev/nvme/bdev_mdns_client.o 00:03:42.750 CC module/bdev/nvme/vbdev_opal.o 00:03:42.750 CC module/bdev/null/bdev_null.o 00:03:42.750 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:42.750 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:42.750 CC module/bdev/lvol/vbdev_lvol.o 00:03:42.750 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:42.750 CC module/bdev/null/bdev_null_rpc.o 00:03:42.750 CC module/bdev/gpt/gpt.o 00:03:42.750 CC module/bdev/iscsi/bdev_iscsi.o 00:03:42.750 CC module/bdev/gpt/vbdev_gpt.o 00:03:42.750 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:42.750 CC module/bdev/malloc/bdev_malloc.o 00:03:42.750 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:42.750 CC module/bdev/delay/vbdev_delay.o 00:03:42.750 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:42.750 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:42.750 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:42.750 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:42.750 CC 
module/bdev/zone_block/vbdev_zone_block.o 00:03:42.750 CC module/bdev/ftl/bdev_ftl.o 00:03:42.750 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:42.750 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:42.750 CC module/bdev/split/vbdev_split.o 00:03:42.750 CC module/bdev/split/vbdev_split_rpc.o 00:03:43.009 LIB libspdk_blobfs_bdev.a 00:03:43.009 SO libspdk_blobfs_bdev.so.6.0 00:03:43.009 SYMLINK libspdk_blobfs_bdev.so 00:03:43.009 LIB libspdk_bdev_gpt.a 00:03:43.009 LIB libspdk_bdev_passthru.a 00:03:43.009 LIB libspdk_bdev_split.a 00:03:43.009 LIB libspdk_bdev_error.a 00:03:43.009 LIB libspdk_bdev_ftl.a 00:03:43.009 LIB libspdk_bdev_null.a 00:03:43.009 SO libspdk_bdev_gpt.so.6.0 00:03:43.009 SO libspdk_bdev_error.so.6.0 00:03:43.009 SO libspdk_bdev_split.so.6.0 00:03:43.009 SO libspdk_bdev_ftl.so.6.0 00:03:43.009 SO libspdk_bdev_passthru.so.6.0 00:03:43.009 LIB libspdk_bdev_zone_block.a 00:03:43.009 SO libspdk_bdev_null.so.6.0 00:03:43.009 LIB libspdk_bdev_iscsi.a 00:03:43.009 LIB libspdk_bdev_malloc.a 00:03:43.009 SYMLINK libspdk_bdev_error.so 00:03:43.009 SYMLINK libspdk_bdev_split.so 00:03:43.009 SYMLINK libspdk_bdev_gpt.so 00:03:43.009 LIB libspdk_bdev_aio.a 00:03:43.009 SO libspdk_bdev_zone_block.so.6.0 00:03:43.009 SYMLINK libspdk_bdev_ftl.so 00:03:43.009 SO libspdk_bdev_iscsi.so.6.0 00:03:43.009 SYMLINK libspdk_bdev_passthru.so 00:03:43.009 SO libspdk_bdev_malloc.so.6.0 00:03:43.009 LIB libspdk_bdev_delay.a 00:03:43.009 SO libspdk_bdev_aio.so.6.0 00:03:43.009 SYMLINK libspdk_bdev_null.so 00:03:43.009 SO libspdk_bdev_delay.so.6.0 00:03:43.269 SYMLINK libspdk_bdev_zone_block.so 00:03:43.269 SYMLINK libspdk_bdev_malloc.so 00:03:43.269 SYMLINK libspdk_bdev_iscsi.so 00:03:43.269 SYMLINK libspdk_bdev_aio.so 00:03:43.269 LIB libspdk_bdev_lvol.a 00:03:43.269 SYMLINK libspdk_bdev_delay.so 00:03:43.269 LIB libspdk_bdev_virtio.a 00:03:43.269 SO libspdk_bdev_lvol.so.6.0 00:03:43.269 SO libspdk_bdev_virtio.so.6.0 00:03:43.269 SYMLINK libspdk_bdev_virtio.so 00:03:43.269 
SYMLINK libspdk_bdev_lvol.so 00:03:43.528 LIB libspdk_bdev_raid.a 00:03:43.528 SO libspdk_bdev_raid.so.6.0 00:03:43.528 SYMLINK libspdk_bdev_raid.so 00:03:44.467 LIB libspdk_bdev_nvme.a 00:03:44.467 SO libspdk_bdev_nvme.so.7.0 00:03:44.467 SYMLINK libspdk_bdev_nvme.so 00:03:45.036 CC module/event/subsystems/iobuf/iobuf.o 00:03:45.036 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:45.036 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:45.036 CC module/event/subsystems/scheduler/scheduler.o 00:03:45.036 CC module/event/subsystems/sock/sock.o 00:03:45.036 CC module/event/subsystems/fsdev/fsdev.o 00:03:45.036 CC module/event/subsystems/vmd/vmd.o 00:03:45.036 CC module/event/subsystems/keyring/keyring.o 00:03:45.036 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:45.036 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:45.294 LIB libspdk_event_vhost_blk.a 00:03:45.294 LIB libspdk_event_scheduler.a 00:03:45.294 LIB libspdk_event_sock.a 00:03:45.294 LIB libspdk_event_keyring.a 00:03:45.294 LIB libspdk_event_iobuf.a 00:03:45.294 LIB libspdk_event_fsdev.a 00:03:45.294 SO libspdk_event_vhost_blk.so.3.0 00:03:45.294 SO libspdk_event_sock.so.5.0 00:03:45.294 SO libspdk_event_scheduler.so.4.0 00:03:45.294 SO libspdk_event_keyring.so.1.0 00:03:45.294 LIB libspdk_event_vmd.a 00:03:45.294 LIB libspdk_event_vfu_tgt.a 00:03:45.294 SO libspdk_event_iobuf.so.3.0 00:03:45.294 SO libspdk_event_fsdev.so.1.0 00:03:45.294 SO libspdk_event_vfu_tgt.so.3.0 00:03:45.294 SO libspdk_event_vmd.so.6.0 00:03:45.294 SYMLINK libspdk_event_sock.so 00:03:45.294 SYMLINK libspdk_event_scheduler.so 00:03:45.294 SYMLINK libspdk_event_vhost_blk.so 00:03:45.294 SYMLINK libspdk_event_keyring.so 00:03:45.294 SYMLINK libspdk_event_iobuf.so 00:03:45.294 SYMLINK libspdk_event_fsdev.so 00:03:45.294 SYMLINK libspdk_event_vfu_tgt.so 00:03:45.294 SYMLINK libspdk_event_vmd.so 00:03:45.863 CC module/event/subsystems/accel/accel.o 00:03:45.863 LIB libspdk_event_accel.a 00:03:45.863 SO 
libspdk_event_accel.so.6.0 00:03:45.863 SYMLINK libspdk_event_accel.so 00:03:46.121 CC module/event/subsystems/bdev/bdev.o 00:03:46.379 LIB libspdk_event_bdev.a 00:03:46.379 SO libspdk_event_bdev.so.6.0 00:03:46.379 SYMLINK libspdk_event_bdev.so 00:03:46.636 CC module/event/subsystems/scsi/scsi.o 00:03:46.894 CC module/event/subsystems/ublk/ublk.o 00:03:46.894 CC module/event/subsystems/nbd/nbd.o 00:03:46.894 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:46.894 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:46.894 LIB libspdk_event_ublk.a 00:03:46.894 LIB libspdk_event_nbd.a 00:03:46.894 LIB libspdk_event_scsi.a 00:03:46.894 SO libspdk_event_ublk.so.3.0 00:03:46.894 SO libspdk_event_nbd.so.6.0 00:03:46.894 SO libspdk_event_scsi.so.6.0 00:03:46.894 SYMLINK libspdk_event_ublk.so 00:03:46.894 LIB libspdk_event_nvmf.a 00:03:46.894 SYMLINK libspdk_event_nbd.so 00:03:46.894 SYMLINK libspdk_event_scsi.so 00:03:47.152 SO libspdk_event_nvmf.so.6.0 00:03:47.152 SYMLINK libspdk_event_nvmf.so 00:03:47.152 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:47.411 CC module/event/subsystems/iscsi/iscsi.o 00:03:47.411 LIB libspdk_event_vhost_scsi.a 00:03:47.411 SO libspdk_event_vhost_scsi.so.3.0 00:03:47.411 LIB libspdk_event_iscsi.a 00:03:47.411 SYMLINK libspdk_event_vhost_scsi.so 00:03:47.411 SO libspdk_event_iscsi.so.6.0 00:03:47.670 SYMLINK libspdk_event_iscsi.so 00:03:47.670 SO libspdk.so.6.0 00:03:47.670 SYMLINK libspdk.so 00:03:47.952 CC app/spdk_nvme_perf/perf.o 00:03:47.952 CC app/spdk_nvme_identify/identify.o 00:03:47.952 CC app/spdk_top/spdk_top.o 00:03:47.952 CXX app/trace/trace.o 00:03:47.952 CC app/spdk_nvme_discover/discovery_aer.o 00:03:47.952 CC app/trace_record/trace_record.o 00:03:47.952 CC app/spdk_lspci/spdk_lspci.o 00:03:47.952 CC test/rpc_client/rpc_client_test.o 00:03:47.952 TEST_HEADER include/spdk/assert.h 00:03:47.952 TEST_HEADER include/spdk/accel_module.h 00:03:47.952 TEST_HEADER include/spdk/accel.h 00:03:47.952 TEST_HEADER 
include/spdk/barrier.h 00:03:47.952 TEST_HEADER include/spdk/base64.h 00:03:47.952 TEST_HEADER include/spdk/bdev.h 00:03:47.952 TEST_HEADER include/spdk/bdev_module.h 00:03:47.952 TEST_HEADER include/spdk/bit_pool.h 00:03:47.952 TEST_HEADER include/spdk/bit_array.h 00:03:47.952 TEST_HEADER include/spdk/bdev_zone.h 00:03:47.952 TEST_HEADER include/spdk/blobfs.h 00:03:47.952 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:47.952 TEST_HEADER include/spdk/blob_bdev.h 00:03:47.952 TEST_HEADER include/spdk/blob.h 00:03:47.952 TEST_HEADER include/spdk/conf.h 00:03:47.952 TEST_HEADER include/spdk/cpuset.h 00:03:47.952 TEST_HEADER include/spdk/config.h 00:03:47.952 TEST_HEADER include/spdk/crc64.h 00:03:47.952 TEST_HEADER include/spdk/crc32.h 00:03:47.952 TEST_HEADER include/spdk/crc16.h 00:03:47.952 CC app/nvmf_tgt/nvmf_main.o 00:03:47.952 TEST_HEADER include/spdk/dma.h 00:03:47.952 TEST_HEADER include/spdk/dif.h 00:03:47.952 TEST_HEADER include/spdk/endian.h 00:03:47.952 TEST_HEADER include/spdk/event.h 00:03:47.952 TEST_HEADER include/spdk/env_dpdk.h 00:03:47.952 TEST_HEADER include/spdk/env.h 00:03:47.952 TEST_HEADER include/spdk/fd_group.h 00:03:47.952 TEST_HEADER include/spdk/file.h 00:03:47.952 TEST_HEADER include/spdk/fd.h 00:03:47.952 TEST_HEADER include/spdk/fsdev_module.h 00:03:47.952 TEST_HEADER include/spdk/fsdev.h 00:03:47.952 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:47.952 TEST_HEADER include/spdk/gpt_spec.h 00:03:47.952 TEST_HEADER include/spdk/ftl.h 00:03:48.221 TEST_HEADER include/spdk/hexlify.h 00:03:48.221 TEST_HEADER include/spdk/idxd.h 00:03:48.221 TEST_HEADER include/spdk/histogram_data.h 00:03:48.221 TEST_HEADER include/spdk/idxd_spec.h 00:03:48.221 TEST_HEADER include/spdk/ioat.h 00:03:48.221 TEST_HEADER include/spdk/init.h 00:03:48.221 TEST_HEADER include/spdk/ioat_spec.h 00:03:48.221 TEST_HEADER include/spdk/iscsi_spec.h 00:03:48.221 TEST_HEADER include/spdk/json.h 00:03:48.221 TEST_HEADER include/spdk/keyring.h 00:03:48.221 TEST_HEADER 
include/spdk/jsonrpc.h 00:03:48.221 TEST_HEADER include/spdk/keyring_module.h 00:03:48.221 CC app/iscsi_tgt/iscsi_tgt.o 00:03:48.221 TEST_HEADER include/spdk/log.h 00:03:48.221 TEST_HEADER include/spdk/md5.h 00:03:48.221 TEST_HEADER include/spdk/lvol.h 00:03:48.221 TEST_HEADER include/spdk/likely.h 00:03:48.221 TEST_HEADER include/spdk/memory.h 00:03:48.221 TEST_HEADER include/spdk/mmio.h 00:03:48.221 TEST_HEADER include/spdk/notify.h 00:03:48.221 TEST_HEADER include/spdk/nbd.h 00:03:48.221 TEST_HEADER include/spdk/net.h 00:03:48.221 TEST_HEADER include/spdk/nvme_intel.h 00:03:48.221 TEST_HEADER include/spdk/nvme.h 00:03:48.221 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:48.221 TEST_HEADER include/spdk/nvme_spec.h 00:03:48.221 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:48.221 CC app/spdk_dd/spdk_dd.o 00:03:48.221 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:48.221 TEST_HEADER include/spdk/nvme_zns.h 00:03:48.221 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:48.221 TEST_HEADER include/spdk/nvmf.h 00:03:48.221 TEST_HEADER include/spdk/nvmf_spec.h 00:03:48.221 TEST_HEADER include/spdk/opal.h 00:03:48.221 TEST_HEADER include/spdk/nvmf_transport.h 00:03:48.221 TEST_HEADER include/spdk/pci_ids.h 00:03:48.221 TEST_HEADER include/spdk/opal_spec.h 00:03:48.221 TEST_HEADER include/spdk/pipe.h 00:03:48.221 TEST_HEADER include/spdk/queue.h 00:03:48.221 TEST_HEADER include/spdk/reduce.h 00:03:48.221 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:48.221 TEST_HEADER include/spdk/rpc.h 00:03:48.221 TEST_HEADER include/spdk/scheduler.h 00:03:48.221 TEST_HEADER include/spdk/scsi_spec.h 00:03:48.221 TEST_HEADER include/spdk/stdinc.h 00:03:48.221 TEST_HEADER include/spdk/sock.h 00:03:48.221 TEST_HEADER include/spdk/scsi.h 00:03:48.221 TEST_HEADER include/spdk/thread.h 00:03:48.221 TEST_HEADER include/spdk/string.h 00:03:48.221 TEST_HEADER include/spdk/trace_parser.h 00:03:48.221 TEST_HEADER include/spdk/trace.h 00:03:48.221 TEST_HEADER include/spdk/ublk.h 00:03:48.221 TEST_HEADER 
include/spdk/tree.h 00:03:48.221 TEST_HEADER include/spdk/util.h 00:03:48.221 TEST_HEADER include/spdk/uuid.h 00:03:48.221 TEST_HEADER include/spdk/version.h 00:03:48.221 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:48.221 CC app/spdk_tgt/spdk_tgt.o 00:03:48.221 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:48.221 TEST_HEADER include/spdk/vmd.h 00:03:48.221 TEST_HEADER include/spdk/vhost.h 00:03:48.221 TEST_HEADER include/spdk/zipf.h 00:03:48.221 TEST_HEADER include/spdk/xor.h 00:03:48.221 CXX test/cpp_headers/accel.o 00:03:48.221 CXX test/cpp_headers/assert.o 00:03:48.221 CXX test/cpp_headers/accel_module.o 00:03:48.221 CXX test/cpp_headers/barrier.o 00:03:48.221 CXX test/cpp_headers/bdev.o 00:03:48.221 CXX test/cpp_headers/bit_pool.o 00:03:48.221 CXX test/cpp_headers/bdev_module.o 00:03:48.221 CXX test/cpp_headers/base64.o 00:03:48.221 CXX test/cpp_headers/bdev_zone.o 00:03:48.221 CXX test/cpp_headers/blob_bdev.o 00:03:48.221 CXX test/cpp_headers/bit_array.o 00:03:48.221 CXX test/cpp_headers/blobfs_bdev.o 00:03:48.221 CXX test/cpp_headers/conf.o 00:03:48.221 CXX test/cpp_headers/blobfs.o 00:03:48.221 CXX test/cpp_headers/cpuset.o 00:03:48.221 CXX test/cpp_headers/blob.o 00:03:48.221 CXX test/cpp_headers/config.o 00:03:48.221 CXX test/cpp_headers/crc16.o 00:03:48.221 CXX test/cpp_headers/crc64.o 00:03:48.221 CXX test/cpp_headers/crc32.o 00:03:48.221 CXX test/cpp_headers/dma.o 00:03:48.221 CXX test/cpp_headers/endian.o 00:03:48.221 CXX test/cpp_headers/env.o 00:03:48.221 CXX test/cpp_headers/dif.o 00:03:48.221 CXX test/cpp_headers/env_dpdk.o 00:03:48.221 CXX test/cpp_headers/event.o 00:03:48.221 CXX test/cpp_headers/fd.o 00:03:48.221 CXX test/cpp_headers/fd_group.o 00:03:48.221 CXX test/cpp_headers/file.o 00:03:48.221 CXX test/cpp_headers/ftl.o 00:03:48.221 CXX test/cpp_headers/fsdev.o 00:03:48.221 CXX test/cpp_headers/fsdev_module.o 00:03:48.221 CXX test/cpp_headers/fuse_dispatcher.o 00:03:48.221 CXX test/cpp_headers/gpt_spec.o 00:03:48.221 CXX 
test/cpp_headers/histogram_data.o 00:03:48.221 CXX test/cpp_headers/hexlify.o 00:03:48.221 CXX test/cpp_headers/idxd.o 00:03:48.221 CXX test/cpp_headers/idxd_spec.o 00:03:48.221 CXX test/cpp_headers/init.o 00:03:48.221 CXX test/cpp_headers/ioat.o 00:03:48.221 CXX test/cpp_headers/iscsi_spec.o 00:03:48.221 CXX test/cpp_headers/json.o 00:03:48.221 CXX test/cpp_headers/ioat_spec.o 00:03:48.221 CXX test/cpp_headers/jsonrpc.o 00:03:48.221 CXX test/cpp_headers/keyring_module.o 00:03:48.221 CXX test/cpp_headers/keyring.o 00:03:48.221 CXX test/cpp_headers/log.o 00:03:48.221 CXX test/cpp_headers/likely.o 00:03:48.221 CXX test/cpp_headers/memory.o 00:03:48.221 CXX test/cpp_headers/lvol.o 00:03:48.221 CXX test/cpp_headers/mmio.o 00:03:48.221 CXX test/cpp_headers/nbd.o 00:03:48.221 CXX test/cpp_headers/net.o 00:03:48.221 CXX test/cpp_headers/md5.o 00:03:48.221 CXX test/cpp_headers/notify.o 00:03:48.221 CXX test/cpp_headers/nvme_intel.o 00:03:48.221 CXX test/cpp_headers/nvme_ocssd.o 00:03:48.221 CXX test/cpp_headers/nvme.o 00:03:48.221 CXX test/cpp_headers/nvme_spec.o 00:03:48.221 CXX test/cpp_headers/nvme_zns.o 00:03:48.221 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:48.221 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:48.221 CXX test/cpp_headers/nvmf.o 00:03:48.221 CXX test/cpp_headers/nvmf_cmd.o 00:03:48.221 CXX test/cpp_headers/nvmf_spec.o 00:03:48.221 CXX test/cpp_headers/nvmf_transport.o 00:03:48.221 CXX test/cpp_headers/opal.o 00:03:48.221 CC test/app/histogram_perf/histogram_perf.o 00:03:48.221 CC test/env/vtophys/vtophys.o 00:03:48.221 CC examples/util/zipf/zipf.o 00:03:48.221 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:48.221 CC test/thread/poller_perf/poller_perf.o 00:03:48.221 CC examples/ioat/verify/verify.o 00:03:48.221 CXX test/cpp_headers/opal_spec.o 00:03:48.221 CC test/env/memory/memory_ut.o 00:03:48.221 CC app/fio/nvme/fio_plugin.o 00:03:48.221 CC test/env/pci/pci_ut.o 00:03:48.221 CC test/app/stub/stub.o 00:03:48.221 CC examples/ioat/perf/perf.o 
00:03:48.221 CC test/app/bdev_svc/bdev_svc.o 00:03:48.221 CC test/app/jsoncat/jsoncat.o 00:03:48.484 CC app/fio/bdev/fio_plugin.o 00:03:48.484 CC test/dma/test_dma/test_dma.o 00:03:48.485 LINK spdk_lspci 00:03:48.485 LINK spdk_nvme_discover 00:03:48.749 LINK interrupt_tgt 00:03:48.749 CC test/env/mem_callbacks/mem_callbacks.o 00:03:48.749 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:48.749 LINK rpc_client_test 00:03:48.749 LINK vtophys 00:03:48.749 LINK nvmf_tgt 00:03:48.749 CXX test/cpp_headers/pci_ids.o 00:03:48.749 CXX test/cpp_headers/pipe.o 00:03:48.749 CXX test/cpp_headers/queue.o 00:03:48.749 CXX test/cpp_headers/reduce.o 00:03:48.749 CXX test/cpp_headers/rpc.o 00:03:48.749 CXX test/cpp_headers/scheduler.o 00:03:48.749 LINK jsoncat 00:03:48.749 CXX test/cpp_headers/scsi.o 00:03:48.749 CXX test/cpp_headers/scsi_spec.o 00:03:48.749 CXX test/cpp_headers/sock.o 00:03:48.749 CXX test/cpp_headers/stdinc.o 00:03:48.749 CXX test/cpp_headers/thread.o 00:03:48.749 CXX test/cpp_headers/string.o 00:03:48.749 CXX test/cpp_headers/trace.o 00:03:48.749 CXX test/cpp_headers/trace_parser.o 00:03:48.749 CXX test/cpp_headers/tree.o 00:03:48.749 CXX test/cpp_headers/ublk.o 00:03:48.749 CXX test/cpp_headers/util.o 00:03:48.749 CXX test/cpp_headers/uuid.o 00:03:48.749 CXX test/cpp_headers/version.o 00:03:48.749 CXX test/cpp_headers/vfio_user_pci.o 00:03:48.749 CXX test/cpp_headers/vfio_user_spec.o 00:03:48.749 CXX test/cpp_headers/vhost.o 00:03:48.749 CXX test/cpp_headers/vmd.o 00:03:48.749 LINK iscsi_tgt 00:03:48.749 CXX test/cpp_headers/zipf.o 00:03:48.749 CXX test/cpp_headers/xor.o 00:03:48.749 LINK bdev_svc 00:03:48.749 LINK verify 00:03:48.749 LINK spdk_trace_record 00:03:48.749 LINK histogram_perf 00:03:48.749 LINK poller_perf 00:03:48.749 LINK env_dpdk_post_init 00:03:48.749 LINK zipf 00:03:49.006 LINK spdk_tgt 00:03:49.006 LINK stub 00:03:49.006 LINK spdk_dd 00:03:49.006 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:49.006 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 
00:03:49.006 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:49.006 LINK ioat_perf 00:03:49.006 LINK mem_callbacks 00:03:49.263 LINK pci_ut 00:03:49.263 LINK spdk_trace 00:03:49.263 LINK spdk_nvme 00:03:49.263 LINK nvme_fuzz 00:03:49.263 LINK spdk_nvme_perf 00:03:49.263 LINK spdk_top 00:03:49.263 LINK spdk_bdev 00:03:49.263 LINK test_dma 00:03:49.263 LINK spdk_nvme_identify 00:03:49.263 CC examples/vmd/lsvmd/lsvmd.o 00:03:49.263 CC examples/sock/hello_world/hello_sock.o 00:03:49.263 CC examples/vmd/led/led.o 00:03:49.263 CC test/event/reactor_perf/reactor_perf.o 00:03:49.263 CC test/event/reactor/reactor.o 00:03:49.263 CC test/event/event_perf/event_perf.o 00:03:49.263 LINK vhost_fuzz 00:03:49.263 CC examples/idxd/perf/perf.o 00:03:49.521 CC test/event/app_repeat/app_repeat.o 00:03:49.521 CC test/event/scheduler/scheduler.o 00:03:49.521 CC examples/thread/thread/thread_ex.o 00:03:49.521 LINK memory_ut 00:03:49.521 CC app/vhost/vhost.o 00:03:49.521 LINK reactor_perf 00:03:49.521 LINK lsvmd 00:03:49.521 LINK led 00:03:49.521 LINK reactor 00:03:49.521 LINK event_perf 00:03:49.521 LINK app_repeat 00:03:49.521 LINK hello_sock 00:03:49.778 LINK scheduler 00:03:49.778 LINK idxd_perf 00:03:49.778 LINK vhost 00:03:49.778 LINK thread 00:03:49.778 CC test/nvme/simple_copy/simple_copy.o 00:03:49.778 CC test/nvme/aer/aer.o 00:03:49.778 CC test/nvme/e2edp/nvme_dp.o 00:03:49.778 CC test/nvme/cuse/cuse.o 00:03:49.778 CC test/nvme/err_injection/err_injection.o 00:03:49.778 CC test/nvme/reset/reset.o 00:03:49.778 CC test/nvme/connect_stress/connect_stress.o 00:03:49.778 CC test/nvme/overhead/overhead.o 00:03:49.778 CC test/nvme/fdp/fdp.o 00:03:49.778 CC test/nvme/boot_partition/boot_partition.o 00:03:49.778 CC test/nvme/compliance/nvme_compliance.o 00:03:49.778 CC test/nvme/startup/startup.o 00:03:49.778 CC test/nvme/sgl/sgl.o 00:03:49.778 CC test/nvme/fused_ordering/fused_ordering.o 00:03:49.778 CC test/nvme/reserve/reserve.o 00:03:49.778 CC test/nvme/doorbell_aers/doorbell_aers.o 
00:03:49.778 CC test/accel/dif/dif.o 00:03:49.778 CC test/blobfs/mkfs/mkfs.o 00:03:49.778 CC test/lvol/esnap/esnap.o 00:03:50.035 LINK boot_partition 00:03:50.035 LINK err_injection 00:03:50.035 LINK connect_stress 00:03:50.035 LINK startup 00:03:50.035 LINK simple_copy 00:03:50.035 LINK reserve 00:03:50.035 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:50.035 CC examples/nvme/abort/abort.o 00:03:50.035 CC examples/nvme/hotplug/hotplug.o 00:03:50.035 LINK fused_ordering 00:03:50.035 LINK nvme_dp 00:03:50.035 CC examples/nvme/reconnect/reconnect.o 00:03:50.035 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:50.035 CC examples/nvme/arbitration/arbitration.o 00:03:50.035 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:50.035 LINK doorbell_aers 00:03:50.035 LINK reset 00:03:50.035 LINK aer 00:03:50.035 CC examples/nvme/hello_world/hello_world.o 00:03:50.035 LINK mkfs 00:03:50.035 LINK sgl 00:03:50.035 LINK overhead 00:03:50.035 LINK nvme_compliance 00:03:50.035 LINK fdp 00:03:50.035 CC examples/accel/perf/accel_perf.o 00:03:50.294 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:50.294 LINK pmr_persistence 00:03:50.294 CC examples/blob/cli/blobcli.o 00:03:50.294 CC examples/blob/hello_world/hello_blob.o 00:03:50.294 LINK cmb_copy 00:03:50.294 LINK hotplug 00:03:50.294 LINK hello_world 00:03:50.294 LINK reconnect 00:03:50.294 LINK iscsi_fuzz 00:03:50.294 LINK arbitration 00:03:50.294 LINK abort 00:03:50.294 LINK dif 00:03:50.294 LINK nvme_manage 00:03:50.552 LINK hello_blob 00:03:50.552 LINK hello_fsdev 00:03:50.552 LINK accel_perf 00:03:50.552 LINK blobcli 00:03:50.810 LINK cuse 00:03:50.810 CC test/bdev/bdevio/bdevio.o 00:03:51.068 CC examples/bdev/bdevperf/bdevperf.o 00:03:51.068 CC examples/bdev/hello_world/hello_bdev.o 00:03:51.326 LINK bdevio 00:03:51.326 LINK hello_bdev 00:03:51.584 LINK bdevperf 00:03:52.151 CC examples/nvmf/nvmf/nvmf.o 00:03:52.409 LINK nvmf 00:03:53.346 LINK esnap 00:03:53.605 00:03:53.605 real 0m53.009s 00:03:53.605 user 6m30.182s 
00:03:53.605 sys 2m37.690s 00:03:53.605 09:38:22 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:53.605 09:38:22 make -- common/autotest_common.sh@10 -- $ set +x 00:03:53.605 ************************************ 00:03:53.605 END TEST make 00:03:53.605 ************************************ 00:03:53.865 09:38:22 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:53.865 09:38:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:53.865 09:38:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:53.865 09:38:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.865 09:38:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:53.865 09:38:22 -- pm/common@44 -- $ pid=943972 00:03:53.865 09:38:22 -- pm/common@50 -- $ kill -TERM 943972 00:03:53.865 09:38:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.865 09:38:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:53.865 09:38:22 -- pm/common@44 -- $ pid=943974 00:03:53.865 09:38:22 -- pm/common@50 -- $ kill -TERM 943974 00:03:53.865 09:38:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.865 09:38:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:53.865 09:38:22 -- pm/common@44 -- $ pid=943976 00:03:53.865 09:38:22 -- pm/common@50 -- $ kill -TERM 943976 00:03:53.865 09:38:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.865 09:38:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:53.865 09:38:22 -- pm/common@44 -- $ pid=943998 00:03:53.865 09:38:22 -- pm/common@50 -- $ sudo -E kill -TERM 943998 00:03:53.865 09:38:22 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:53.865 09:38:22 -- 
common/autotest_common.sh@1681 -- # lcov --version 00:03:53.865 09:38:22 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:53.865 09:38:22 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:53.865 09:38:22 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.865 09:38:22 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.865 09:38:22 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.865 09:38:22 -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.865 09:38:22 -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.865 09:38:22 -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.865 09:38:22 -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.865 09:38:22 -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.866 09:38:22 -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.866 09:38:22 -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.866 09:38:22 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.866 09:38:22 -- scripts/common.sh@344 -- # case "$op" in 00:03:53.866 09:38:22 -- scripts/common.sh@345 -- # : 1 00:03:53.866 09:38:22 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.866 09:38:22 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:53.866 09:38:22 -- scripts/common.sh@365 -- # decimal 1 00:03:53.866 09:38:22 -- scripts/common.sh@353 -- # local d=1 00:03:53.866 09:38:22 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.866 09:38:22 -- scripts/common.sh@355 -- # echo 1 00:03:53.866 09:38:22 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.866 09:38:22 -- scripts/common.sh@366 -- # decimal 2 00:03:53.866 09:38:22 -- scripts/common.sh@353 -- # local d=2 00:03:53.866 09:38:22 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.866 09:38:22 -- scripts/common.sh@355 -- # echo 2 00:03:53.866 09:38:22 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.866 09:38:22 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.866 09:38:22 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.866 09:38:22 -- scripts/common.sh@368 -- # return 0 00:03:53.866 09:38:22 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.866 09:38:22 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:53.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.866 --rc genhtml_branch_coverage=1 00:03:53.866 --rc genhtml_function_coverage=1 00:03:53.866 --rc genhtml_legend=1 00:03:53.866 --rc geninfo_all_blocks=1 00:03:53.866 --rc geninfo_unexecuted_blocks=1 00:03:53.866 00:03:53.866 ' 00:03:53.866 09:38:22 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:53.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.866 --rc genhtml_branch_coverage=1 00:03:53.866 --rc genhtml_function_coverage=1 00:03:53.866 --rc genhtml_legend=1 00:03:53.866 --rc geninfo_all_blocks=1 00:03:53.866 --rc geninfo_unexecuted_blocks=1 00:03:53.866 00:03:53.866 ' 00:03:53.866 09:38:22 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:53.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.866 --rc genhtml_branch_coverage=1 00:03:53.866 --rc 
genhtml_function_coverage=1 00:03:53.866 --rc genhtml_legend=1 00:03:53.866 --rc geninfo_all_blocks=1 00:03:53.866 --rc geninfo_unexecuted_blocks=1 00:03:53.866 00:03:53.866 ' 00:03:53.866 09:38:22 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:53.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.866 --rc genhtml_branch_coverage=1 00:03:53.866 --rc genhtml_function_coverage=1 00:03:53.866 --rc genhtml_legend=1 00:03:53.866 --rc geninfo_all_blocks=1 00:03:53.866 --rc geninfo_unexecuted_blocks=1 00:03:53.866 00:03:53.866 ' 00:03:53.866 09:38:22 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:53.866 09:38:22 -- nvmf/common.sh@7 -- # uname -s 00:03:53.866 09:38:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:53.866 09:38:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:53.866 09:38:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:53.866 09:38:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:53.866 09:38:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:53.866 09:38:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:53.866 09:38:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:53.866 09:38:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:53.866 09:38:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:53.866 09:38:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:53.866 09:38:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:53.866 09:38:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:53.866 09:38:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:53.866 09:38:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:53.866 09:38:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:53.866 09:38:22 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:53.866 09:38:22 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:53.866 09:38:22 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:54.126 09:38:22 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:54.126 09:38:22 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:54.126 09:38:22 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:54.126 09:38:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.126 09:38:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.126 09:38:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.126 09:38:22 -- paths/export.sh@5 -- # export PATH 00:03:54.126 09:38:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.126 09:38:22 -- nvmf/common.sh@51 -- # : 0 00:03:54.126 09:38:22 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:54.126 09:38:22 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:54.126 09:38:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:54.126 09:38:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:54.126 09:38:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:54.126 09:38:22 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:54.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:54.126 09:38:22 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:54.126 09:38:22 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:54.126 09:38:22 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:54.126 09:38:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:54.126 09:38:22 -- spdk/autotest.sh@32 -- # uname -s 00:03:54.126 09:38:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:54.126 09:38:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:54.126 09:38:22 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:54.126 09:38:22 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:54.126 09:38:22 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:54.126 09:38:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:54.126 09:38:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:54.126 09:38:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:54.126 09:38:22 -- spdk/autotest.sh@48 -- # udevadm_pid=1021585 00:03:54.126 09:38:22 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:54.126 09:38:22 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:54.126 09:38:22 -- pm/common@17 -- # local monitor 00:03:54.126 09:38:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.126 09:38:22 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:54.126 09:38:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.126 09:38:22 -- pm/common@21 -- # date +%s 00:03:54.126 09:38:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.126 09:38:22 -- pm/common@21 -- # date +%s 00:03:54.126 09:38:22 -- pm/common@25 -- # sleep 1 00:03:54.126 09:38:22 -- pm/common@21 -- # date +%s 00:03:54.126 09:38:22 -- pm/common@21 -- # date +%s 00:03:54.126 09:38:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733560702 00:03:54.126 09:38:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733560702 00:03:54.126 09:38:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733560702 00:03:54.126 09:38:22 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733560702 00:03:54.126 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733560702_collect-vmstat.pm.log 00:03:54.126 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733560702_collect-cpu-load.pm.log 00:03:54.126 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733560702_collect-cpu-temp.pm.log 00:03:54.126 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733560702_collect-bmc-pm.bmc.pm.log 00:03:55.067 
09:38:23 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:55.067 09:38:23 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:55.067 09:38:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:55.067 09:38:23 -- common/autotest_common.sh@10 -- # set +x 00:03:55.067 09:38:23 -- spdk/autotest.sh@59 -- # create_test_list 00:03:55.067 09:38:23 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:55.067 09:38:23 -- common/autotest_common.sh@10 -- # set +x 00:03:55.067 09:38:23 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:55.067 09:38:23 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:55.067 09:38:23 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:55.067 09:38:23 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:55.067 09:38:23 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:55.067 09:38:23 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:55.067 09:38:23 -- common/autotest_common.sh@1455 -- # uname 00:03:55.067 09:38:23 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:55.067 09:38:23 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:55.067 09:38:23 -- common/autotest_common.sh@1475 -- # uname 00:03:55.067 09:38:23 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:55.067 09:38:23 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:55.067 09:38:23 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:55.067 lcov: LCOV version 1.15 00:03:55.067 09:38:23 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:17.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:17.012 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:20.303 09:38:48 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:20.303 09:38:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:20.303 09:38:48 -- common/autotest_common.sh@10 -- # set +x 00:04:20.303 09:38:48 -- spdk/autotest.sh@78 -- # rm -f 00:04:20.303 09:38:48 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:22.841 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:22.841 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:22.841 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:22.841 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:22.841 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:22.841 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:22.841 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:22.841 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:22.841 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:22.841 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:22.841 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:22.841 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:22.841 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:22.841 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:23.101 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:23.101 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:23.101 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:23.101 09:38:51 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:23.101 09:38:51 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:23.101 09:38:51 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:23.101 09:38:51 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:23.101 09:38:51 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:23.101 09:38:51 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:23.101 09:38:51 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:23.101 09:38:51 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:23.101 09:38:51 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:23.101 09:38:51 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:23.101 09:38:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.101 09:38:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:23.101 09:38:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:23.101 09:38:51 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:23.101 09:38:51 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:23.101 No valid GPT data, bailing 00:04:23.101 09:38:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:23.101 09:38:51 -- scripts/common.sh@394 -- # pt= 00:04:23.101 09:38:51 -- scripts/common.sh@395 -- # return 1 00:04:23.101 09:38:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:23.101 1+0 records in 00:04:23.101 1+0 records out 00:04:23.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00406492 s, 258 MB/s 00:04:23.101 09:38:51 -- spdk/autotest.sh@105 -- # sync 00:04:23.101 09:38:51 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:23.101 09:38:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:23.101 09:38:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:28.370 09:38:57 -- spdk/autotest.sh@111 -- # uname -s 00:04:28.370 09:38:57 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:28.370 09:38:57 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:28.370 09:38:57 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:30.903 Hugepages 00:04:30.903 node hugesize free / total 00:04:30.903 node0 1048576kB 0 / 0 00:04:30.903 node0 2048kB 0 / 0 00:04:30.903 node1 1048576kB 0 / 0 00:04:30.903 node1 2048kB 0 / 0 00:04:30.903 00:04:30.903 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:30.903 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:30.903 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:30.903 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:31.163 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:31.163 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:31.163 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:31.163 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:31.163 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:31.163 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:31.163 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:31.163 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:31.163 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:31.163 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:31.163 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:31.163 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:31.163 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:31.163 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:31.163 09:38:59 -- spdk/autotest.sh@117 -- # uname -s 00:04:31.163 09:38:59 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:31.163 09:38:59 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:04:31.163 09:38:59 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:33.701 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:33.701 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:33.701 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:33.701 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:33.701 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:33.701 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:33.701 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:33.701 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:33.701 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:33.701 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:33.701 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:33.701 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:33.701 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:33.701 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:33.701 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:33.701 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:34.271 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:34.531 09:39:03 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:35.471 09:39:04 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:35.471 09:39:04 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:35.471 09:39:04 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:35.471 09:39:04 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:35.471 09:39:04 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:35.471 09:39:04 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:35.471 09:39:04 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:35.471 09:39:04 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:35.471 09:39:04 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:04:35.471 09:39:04 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:35.471 09:39:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:35.471 09:39:04 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.762 Waiting for block devices as requested 00:04:38.762 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:38.762 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:38.762 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:38.762 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:38.762 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:38.762 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:38.762 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:38.762 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:38.762 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:39.021 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:39.021 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:39.021 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:39.021 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:39.281 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:39.281 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:39.281 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:39.540 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:39.540 09:39:08 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:39.541 09:39:08 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:39.541 09:39:08 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:39.541 09:39:08 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:04:39.541 09:39:08 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:39.541 09:39:08 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:39.541 09:39:08 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:39.541 09:39:08 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:39.541 09:39:08 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:39.541 09:39:08 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:39.541 09:39:08 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:39.541 09:39:08 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:39.541 09:39:08 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:39.541 09:39:08 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:04:39.541 09:39:08 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:39.541 09:39:08 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:39.541 09:39:08 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:39.541 09:39:08 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:39.541 09:39:08 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:39.541 09:39:08 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:39.541 09:39:08 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:39.541 09:39:08 -- common/autotest_common.sh@1541 -- # continue 00:04:39.541 09:39:08 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:39.541 09:39:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:39.541 09:39:08 -- common/autotest_common.sh@10 -- # set +x 00:04:39.541 09:39:08 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:39.541 09:39:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:39.541 09:39:08 -- common/autotest_common.sh@10 -- # set +x 00:04:39.541 09:39:08 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.078 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:42.078 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:04:42.338 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:42.338 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:42.338 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:42.338 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:42.338 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:42.338 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:42.338 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:42.338 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:42.338 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:42.338 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:42.338 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:42.338 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:42.338 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:42.338 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:43.277 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:43.277 09:39:11 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:43.277 09:39:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:43.277 09:39:11 -- common/autotest_common.sh@10 -- # set +x 00:04:43.277 09:39:11 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:43.277 09:39:11 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:43.277 09:39:11 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:43.277 09:39:11 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:43.277 09:39:11 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:43.278 09:39:11 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:43.278 09:39:11 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:43.278 09:39:11 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:43.278 09:39:11 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:43.278 09:39:11 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:43.278 09:39:11 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:04:43.278 09:39:11 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:43.278 09:39:11 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:43.278 09:39:11 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:43.278 09:39:11 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:43.278 09:39:11 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:43.278 09:39:11 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:43.278 09:39:11 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:43.278 09:39:11 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:43.278 09:39:11 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:43.278 09:39:11 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:43.278 09:39:11 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:04:43.278 09:39:11 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:04:43.278 09:39:11 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1036320 00:04:43.278 09:39:11 -- common/autotest_common.sh@1583 -- # waitforlisten 1036320 00:04:43.278 09:39:11 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.278 09:39:11 -- common/autotest_common.sh@831 -- # '[' -z 1036320 ']' 00:04:43.278 09:39:11 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.278 09:39:11 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:43.278 09:39:11 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:43.278 09:39:11 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:43.278 09:39:11 -- common/autotest_common.sh@10 -- # set +x 00:04:43.537 [2024-12-07 09:39:12.048215] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:04:43.537 [2024-12-07 09:39:12.048266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036320 ] 00:04:43.537 [2024-12-07 09:39:12.103955] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.537 [2024-12-07 09:39:12.144194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.797 09:39:12 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:43.797 09:39:12 -- common/autotest_common.sh@864 -- # return 0 00:04:43.797 09:39:12 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:43.797 09:39:12 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:43.797 09:39:12 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:47.085 nvme0n1 00:04:47.085 09:39:15 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:47.085 [2024-12-07 09:39:15.515066] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:47.085 request: 00:04:47.085 { 00:04:47.085 "nvme_ctrlr_name": "nvme0", 00:04:47.085 "password": "test", 00:04:47.085 "method": "bdev_nvme_opal_revert", 00:04:47.085 "req_id": 1 00:04:47.085 } 00:04:47.085 Got JSON-RPC error response 00:04:47.085 response: 00:04:47.085 { 00:04:47.085 "code": -32602, 00:04:47.085 "message": "Invalid parameters" 00:04:47.085 } 00:04:47.085 09:39:15 -- common/autotest_common.sh@1589 -- # true 
00:04:47.085 09:39:15 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:47.085 09:39:15 -- common/autotest_common.sh@1593 -- # killprocess 1036320 00:04:47.085 09:39:15 -- common/autotest_common.sh@950 -- # '[' -z 1036320 ']' 00:04:47.085 09:39:15 -- common/autotest_common.sh@954 -- # kill -0 1036320 00:04:47.085 09:39:15 -- common/autotest_common.sh@955 -- # uname 00:04:47.085 09:39:15 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:47.085 09:39:15 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1036320 00:04:47.085 09:39:15 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:47.085 09:39:15 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:47.085 09:39:15 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1036320' 00:04:47.085 killing process with pid 1036320 00:04:47.085 09:39:15 -- common/autotest_common.sh@969 -- # kill 1036320 00:04:47.085 09:39:15 -- common/autotest_common.sh@974 -- # wait 1036320 00:04:47.085 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152
00:04:48.985 09:39:17 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:48.985 09:39:17 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:48.985 09:39:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:48.985 09:39:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:48.985 09:39:17 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:48.985 09:39:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:48.985 09:39:17 -- common/autotest_common.sh@10 -- # set +x 00:04:48.986 09:39:17 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:48.986 09:39:17 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:48.986 09:39:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.986 09:39:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.986 09:39:17 -- common/autotest_common.sh@10 -- # set +x 00:04:48.986 ************************************ 00:04:48.986 START TEST env 00:04:48.986 ************************************ 00:04:48.986 09:39:17 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:48.986 * Looking for test storage... 
00:04:48.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:48.986 09:39:17 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:48.986 09:39:17 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:48.986 09:39:17 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:48.986 09:39:17 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:48.986 09:39:17 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.986 09:39:17 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.986 09:39:17 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.986 09:39:17 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.986 09:39:17 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.986 09:39:17 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.986 09:39:17 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.986 09:39:17 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.986 09:39:17 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.986 09:39:17 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.986 09:39:17 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.986 09:39:17 env -- scripts/common.sh@344 -- # case "$op" in 00:04:48.986 09:39:17 env -- scripts/common.sh@345 -- # : 1 00:04:48.986 09:39:17 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.986 09:39:17 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.986 09:39:17 env -- scripts/common.sh@365 -- # decimal 1 00:04:48.986 09:39:17 env -- scripts/common.sh@353 -- # local d=1 00:04:48.986 09:39:17 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.986 09:39:17 env -- scripts/common.sh@355 -- # echo 1 00:04:48.986 09:39:17 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.986 09:39:17 env -- scripts/common.sh@366 -- # decimal 2 00:04:48.986 09:39:17 env -- scripts/common.sh@353 -- # local d=2 00:04:48.986 09:39:17 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.986 09:39:17 env -- scripts/common.sh@355 -- # echo 2 00:04:48.986 09:39:17 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.986 09:39:17 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.986 09:39:17 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.986 09:39:17 env -- scripts/common.sh@368 -- # return 0 00:04:48.986 09:39:17 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.986 09:39:17 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:48.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.986 --rc genhtml_branch_coverage=1 00:04:48.986 --rc genhtml_function_coverage=1 00:04:48.986 --rc genhtml_legend=1 00:04:48.986 --rc geninfo_all_blocks=1 00:04:48.986 --rc geninfo_unexecuted_blocks=1 00:04:48.986 00:04:48.986 ' 00:04:48.986 09:39:17 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:48.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.986 --rc genhtml_branch_coverage=1 00:04:48.986 --rc genhtml_function_coverage=1 00:04:48.986 --rc genhtml_legend=1 00:04:48.986 --rc geninfo_all_blocks=1 00:04:48.986 --rc geninfo_unexecuted_blocks=1 00:04:48.986 00:04:48.986 ' 00:04:48.986 09:39:17 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:48.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:48.986 --rc genhtml_branch_coverage=1 00:04:48.986 --rc genhtml_function_coverage=1 00:04:48.986 --rc genhtml_legend=1 00:04:48.986 --rc geninfo_all_blocks=1 00:04:48.986 --rc geninfo_unexecuted_blocks=1 00:04:48.986 00:04:48.986 ' 00:04:48.986 09:39:17 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:48.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.986 --rc genhtml_branch_coverage=1 00:04:48.986 --rc genhtml_function_coverage=1 00:04:48.986 --rc genhtml_legend=1 00:04:48.986 --rc geninfo_all_blocks=1 00:04:48.986 --rc geninfo_unexecuted_blocks=1 00:04:48.986 00:04:48.986 ' 00:04:48.986 09:39:17 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:48.986 09:39:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.986 09:39:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.986 09:39:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.986 ************************************ 00:04:48.986 START TEST env_memory 00:04:48.986 ************************************ 00:04:48.986 09:39:17 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:48.986 00:04:48.986 00:04:48.986 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.986 http://cunit.sourceforge.net/ 00:04:48.986 00:04:48.986 00:04:48.986 Suite: memory 00:04:48.986 Test: alloc and free memory map ...[2024-12-07 09:39:17.495351] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:48.986 passed 00:04:48.986 Test: mem map translation ...[2024-12-07 09:39:17.514016] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:48.986 [2024-12-07 
09:39:17.514041] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:48.986 [2024-12-07 09:39:17.514074] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:48.986 [2024-12-07 09:39:17.514079] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:48.986 passed 00:04:48.986 Test: mem map registration ...[2024-12-07 09:39:17.550659] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:48.986 [2024-12-07 09:39:17.550671] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:48.986 passed 00:04:48.986 Test: mem map adjacent registrations ...passed 00:04:48.986 00:04:48.986 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.986 suites 1 1 n/a 0 0 00:04:48.986 tests 4 4 4 0 0 00:04:48.986 asserts 152 152 152 0 n/a 00:04:48.986 00:04:48.986 Elapsed time = 0.136 seconds 00:04:48.986 00:04:48.986 real 0m0.149s 00:04:48.986 user 0m0.138s 00:04:48.986 sys 0m0.010s 00:04:48.986 09:39:17 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.986 09:39:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:48.986 ************************************ 00:04:48.986 END TEST env_memory 00:04:48.986 ************************************ 00:04:48.986 09:39:17 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:48.986 09:39:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:04:48.986 09:39:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.986 09:39:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.986 ************************************ 00:04:48.986 START TEST env_vtophys 00:04:48.986 ************************************ 00:04:48.986 09:39:17 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:48.986 EAL: lib.eal log level changed from notice to debug 00:04:48.986 EAL: Detected lcore 0 as core 0 on socket 0 00:04:48.986 EAL: Detected lcore 1 as core 1 on socket 0 00:04:48.986 EAL: Detected lcore 2 as core 2 on socket 0 00:04:48.986 EAL: Detected lcore 3 as core 3 on socket 0 00:04:48.986 EAL: Detected lcore 4 as core 4 on socket 0 00:04:48.986 EAL: Detected lcore 5 as core 5 on socket 0 00:04:48.986 EAL: Detected lcore 6 as core 6 on socket 0 00:04:48.986 EAL: Detected lcore 7 as core 8 on socket 0 00:04:48.986 EAL: Detected lcore 8 as core 9 on socket 0 00:04:48.986 EAL: Detected lcore 9 as core 10 on socket 0 00:04:48.986 EAL: Detected lcore 10 as core 11 on socket 0 00:04:48.986 EAL: Detected lcore 11 as core 12 on socket 0 00:04:48.986 EAL: Detected lcore 12 as core 13 on socket 0 00:04:48.986 EAL: Detected lcore 13 as core 16 on socket 0 00:04:48.986 EAL: Detected lcore 14 as core 17 on socket 0 00:04:48.986 EAL: Detected lcore 15 as core 18 on socket 0 00:04:48.986 EAL: Detected lcore 16 as core 19 on socket 0 00:04:48.986 EAL: Detected lcore 17 as core 20 on socket 0 00:04:48.986 EAL: Detected lcore 18 as core 21 on socket 0 00:04:48.986 EAL: Detected lcore 19 as core 25 on socket 0 00:04:48.986 EAL: Detected lcore 20 as core 26 on socket 0 00:04:48.986 EAL: Detected lcore 21 as core 27 on socket 0 00:04:48.986 EAL: Detected lcore 22 as core 28 on socket 0 00:04:48.986 EAL: Detected lcore 23 as core 29 on socket 0 00:04:48.986 EAL: Detected lcore 24 as core 0 on socket 1 00:04:48.986 EAL: Detected lcore 25 
as core 1 on socket 1 00:04:48.986 EAL: Detected lcore 26 as core 2 on socket 1 00:04:48.986 EAL: Detected lcore 27 as core 3 on socket 1 00:04:48.986 EAL: Detected lcore 28 as core 4 on socket 1 00:04:48.986 EAL: Detected lcore 29 as core 5 on socket 1 00:04:48.986 EAL: Detected lcore 30 as core 6 on socket 1 00:04:48.986 EAL: Detected lcore 31 as core 9 on socket 1 00:04:48.986 EAL: Detected lcore 32 as core 10 on socket 1 00:04:48.986 EAL: Detected lcore 33 as core 11 on socket 1 00:04:48.986 EAL: Detected lcore 34 as core 12 on socket 1 00:04:48.986 EAL: Detected lcore 35 as core 13 on socket 1 00:04:48.986 EAL: Detected lcore 36 as core 16 on socket 1 00:04:48.986 EAL: Detected lcore 37 as core 17 on socket 1 00:04:48.987 EAL: Detected lcore 38 as core 18 on socket 1 00:04:48.987 EAL: Detected lcore 39 as core 19 on socket 1 00:04:48.987 EAL: Detected lcore 40 as core 20 on socket 1 00:04:48.987 EAL: Detected lcore 41 as core 21 on socket 1 00:04:48.987 EAL: Detected lcore 42 as core 24 on socket 1 00:04:48.987 EAL: Detected lcore 43 as core 25 on socket 1 00:04:48.987 EAL: Detected lcore 44 as core 26 on socket 1 00:04:48.987 EAL: Detected lcore 45 as core 27 on socket 1 00:04:48.987 EAL: Detected lcore 46 as core 28 on socket 1 00:04:48.987 EAL: Detected lcore 47 as core 29 on socket 1 00:04:48.987 EAL: Detected lcore 48 as core 0 on socket 0 00:04:48.987 EAL: Detected lcore 49 as core 1 on socket 0 00:04:48.987 EAL: Detected lcore 50 as core 2 on socket 0 00:04:48.987 EAL: Detected lcore 51 as core 3 on socket 0 00:04:48.987 EAL: Detected lcore 52 as core 4 on socket 0 00:04:48.987 EAL: Detected lcore 53 as core 5 on socket 0 00:04:48.987 EAL: Detected lcore 54 as core 6 on socket 0 00:04:48.987 EAL: Detected lcore 55 as core 8 on socket 0 00:04:48.987 EAL: Detected lcore 56 as core 9 on socket 0 00:04:48.987 EAL: Detected lcore 57 as core 10 on socket 0 00:04:48.987 EAL: Detected lcore 58 as core 11 on socket 0 00:04:48.987 EAL: Detected lcore 59 as core 
12 on socket 0 00:04:48.987 EAL: Detected lcore 60 as core 13 on socket 0 00:04:48.987 EAL: Detected lcore 61 as core 16 on socket 0 00:04:48.987 EAL: Detected lcore 62 as core 17 on socket 0 00:04:48.987 EAL: Detected lcore 63 as core 18 on socket 0 00:04:48.987 EAL: Detected lcore 64 as core 19 on socket 0 00:04:48.987 EAL: Detected lcore 65 as core 20 on socket 0 00:04:48.987 EAL: Detected lcore 66 as core 21 on socket 0 00:04:48.987 EAL: Detected lcore 67 as core 25 on socket 0 00:04:48.987 EAL: Detected lcore 68 as core 26 on socket 0 00:04:48.987 EAL: Detected lcore 69 as core 27 on socket 0 00:04:48.987 EAL: Detected lcore 70 as core 28 on socket 0 00:04:48.987 EAL: Detected lcore 71 as core 29 on socket 0 00:04:48.987 EAL: Detected lcore 72 as core 0 on socket 1 00:04:48.987 EAL: Detected lcore 73 as core 1 on socket 1 00:04:48.987 EAL: Detected lcore 74 as core 2 on socket 1 00:04:48.987 EAL: Detected lcore 75 as core 3 on socket 1 00:04:48.987 EAL: Detected lcore 76 as core 4 on socket 1 00:04:48.987 EAL: Detected lcore 77 as core 5 on socket 1 00:04:48.987 EAL: Detected lcore 78 as core 6 on socket 1 00:04:48.987 EAL: Detected lcore 79 as core 9 on socket 1 00:04:48.987 EAL: Detected lcore 80 as core 10 on socket 1 00:04:48.987 EAL: Detected lcore 81 as core 11 on socket 1 00:04:48.987 EAL: Detected lcore 82 as core 12 on socket 1 00:04:48.987 EAL: Detected lcore 83 as core 13 on socket 1 00:04:48.987 EAL: Detected lcore 84 as core 16 on socket 1 00:04:48.987 EAL: Detected lcore 85 as core 17 on socket 1 00:04:48.987 EAL: Detected lcore 86 as core 18 on socket 1 00:04:48.987 EAL: Detected lcore 87 as core 19 on socket 1 00:04:48.987 EAL: Detected lcore 88 as core 20 on socket 1 00:04:48.987 EAL: Detected lcore 89 as core 21 on socket 1 00:04:48.987 EAL: Detected lcore 90 as core 24 on socket 1 00:04:48.987 EAL: Detected lcore 91 as core 25 on socket 1 00:04:48.987 EAL: Detected lcore 92 as core 26 on socket 1 00:04:48.987 EAL: Detected lcore 93 as core 
27 on socket 1 00:04:48.987 EAL: Detected lcore 94 as core 28 on socket 1 00:04:48.987 EAL: Detected lcore 95 as core 29 on socket 1 00:04:48.987 EAL: Maximum logical cores by configuration: 128 00:04:48.987 EAL: Detected CPU lcores: 96 00:04:48.987 EAL: Detected NUMA nodes: 2 00:04:48.987 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:04:48.987 EAL: Detected shared linkage of DPDK 00:04:48.987 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:04:48.987 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:04:48.987 EAL: Registered [vdev] bus. 00:04:48.987 EAL: bus.vdev log level changed from disabled to notice 00:04:48.987 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:04:48.987 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:04:48.987 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:48.987 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:48.987 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:04:48.987 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:04:48.987 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:04:48.987 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:04:48.987 EAL: No shared files mode enabled, IPC will be disabled 00:04:48.987 EAL: No shared files mode enabled, IPC is disabled 00:04:48.987 EAL: Bus pci wants IOVA as 'DC' 00:04:48.987 EAL: Bus vdev wants IOVA as 'DC' 00:04:48.987 EAL: Buses did not request a specific IOVA mode. 
00:04:48.987 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:48.987 EAL: Selected IOVA mode 'VA' 00:04:48.987 EAL: Probing VFIO support... 00:04:48.987 EAL: IOMMU type 1 (Type 1) is supported 00:04:48.987 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:48.987 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:48.987 EAL: VFIO support initialized 00:04:48.987 EAL: Ask a virtual area of 0x2e000 bytes 00:04:48.987 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:48.987 EAL: Setting up physically contiguous memory... 00:04:48.987 EAL: Setting maximum number of open files to 524288 00:04:48.987 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:48.987 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:48.987 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:48.987 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.987 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:48.987 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.987 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.987 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:48.987 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:48.987 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.987 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:48.987 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.987 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.987 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:48.987 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:48.987 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.987 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:48.987 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.987 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.987 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 
00:04:48.987 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:48.987 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.987 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:48.987 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:48.987 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.987 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:48.987 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:48.987 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:04:48.987 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.987 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:04:48.987 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:48.987 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.987 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:04:48.987 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:04:48.987 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.987 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:04:48.987 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:48.987 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.987 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:04:48.987 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:04:48.987 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.987 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:04:48.987 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:48.987 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.987 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:04:48.987 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:04:48.987 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.987 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:04:48.987 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:48.987 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.987 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:04:48.987 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:04:48.987 EAL: Hugepages will be freed exactly as allocated.
00:04:48.987 EAL: No shared files mode enabled, IPC is disabled
00:04:48.987 EAL: No shared files mode enabled, IPC is disabled
00:04:48.987 EAL: TSC frequency is ~2300000 KHz
00:04:48.987 EAL: Main lcore 0 is ready (tid=7f128d3b1a00;cpuset=[0])
00:04:48.987 EAL: Trying to obtain current memory policy.
00:04:48.987 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:48.987 EAL: Restoring previous memory policy: 0
00:04:48.987 EAL: request: mp_malloc_sync
00:04:48.987 EAL: No shared files mode enabled, IPC is disabled
00:04:48.987 EAL: Heap on socket 0 was expanded by 2MB
00:04:48.987 EAL: PCI device 0000:3d:00.0 on NUMA socket 0
00:04:48.987 EAL: probe driver: 8086:37d2 net_i40e
00:04:48.987 EAL: Not managed by a supported kernel driver, skipped
00:04:48.987 EAL: PCI device 0000:3d:00.1 on NUMA socket 0
00:04:48.987 EAL: probe driver: 8086:37d2 net_i40e
00:04:48.987 EAL: Not managed by a supported kernel driver, skipped
00:04:48.987 EAL: No shared files mode enabled, IPC is disabled
00:04:49.246 EAL: No shared files mode enabled, IPC is disabled
00:04:49.246 EAL: No PCI address specified using 'addr=' in: bus=pci
00:04:49.246 EAL: Mem event callback 'spdk:(nil)' registered
00:04:49.246
00:04:49.246
00:04:49.246 CUnit - A unit testing framework for C - Version 2.1-3
00:04:49.246 http://cunit.sourceforge.net/
00:04:49.246
00:04:49.246
00:04:49.246 Suite: components_suite
00:04:49.246 Test: vtophys_malloc_test ...passed
00:04:49.246 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:49.246 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.246 EAL: Restoring previous memory policy: 4
00:04:49.246 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.246 EAL: request: mp_malloc_sync
00:04:49.246 EAL: No shared files mode enabled, IPC is disabled
00:04:49.246 EAL: Heap on socket 0 was expanded by 4MB
00:04:49.246 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.246 EAL: request: mp_malloc_sync
00:04:49.246 EAL: No shared files mode enabled, IPC is disabled
00:04:49.246 EAL: Heap on socket 0 was shrunk by 4MB
00:04:49.246 EAL: Trying to obtain current memory policy.
00:04:49.246 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.246 EAL: Restoring previous memory policy: 4
00:04:49.246 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.246 EAL: request: mp_malloc_sync
00:04:49.246 EAL: No shared files mode enabled, IPC is disabled
00:04:49.246 EAL: Heap on socket 0 was expanded by 6MB
00:04:49.246 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.246 EAL: request: mp_malloc_sync
00:04:49.246 EAL: No shared files mode enabled, IPC is disabled
00:04:49.246 EAL: Heap on socket 0 was shrunk by 6MB
00:04:49.246 EAL: Trying to obtain current memory policy.
00:04:49.246 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.246 EAL: Restoring previous memory policy: 4
00:04:49.246 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.246 EAL: request: mp_malloc_sync
00:04:49.246 EAL: No shared files mode enabled, IPC is disabled
00:04:49.246 EAL: Heap on socket 0 was expanded by 10MB
00:04:49.246 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.246 EAL: request: mp_malloc_sync
00:04:49.246 EAL: No shared files mode enabled, IPC is disabled
00:04:49.246 EAL: Heap on socket 0 was shrunk by 10MB
00:04:49.246 EAL: Trying to obtain current memory policy.
00:04:49.246 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.246 EAL: Restoring previous memory policy: 4
00:04:49.246 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.246 EAL: request: mp_malloc_sync
00:04:49.246 EAL: No shared files mode enabled, IPC is disabled
00:04:49.246 EAL: Heap on socket 0 was expanded by 18MB
00:04:49.246 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.246 EAL: request: mp_malloc_sync
00:04:49.246 EAL: No shared files mode enabled, IPC is disabled
00:04:49.246 EAL: Heap on socket 0 was shrunk by 18MB
00:04:49.246 EAL: Trying to obtain current memory policy.
00:04:49.246 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.246 EAL: Restoring previous memory policy: 4
00:04:49.246 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.246 EAL: request: mp_malloc_sync
00:04:49.246 EAL: No shared files mode enabled, IPC is disabled
00:04:49.246 EAL: Heap on socket 0 was expanded by 34MB
00:04:49.246 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.246 EAL: request: mp_malloc_sync
00:04:49.246 EAL: No shared files mode enabled, IPC is disabled
00:04:49.246 EAL: Heap on socket 0 was shrunk by 34MB
00:04:49.246 EAL: Trying to obtain current memory policy.
00:04:49.246 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.246 EAL: Restoring previous memory policy: 4
00:04:49.246 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.246 EAL: request: mp_malloc_sync
00:04:49.246 EAL: No shared files mode enabled, IPC is disabled
00:04:49.246 EAL: Heap on socket 0 was expanded by 66MB
00:04:49.246 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.246 EAL: request: mp_malloc_sync
00:04:49.246 EAL: No shared files mode enabled, IPC is disabled
00:04:49.246 EAL: Heap on socket 0 was shrunk by 66MB
00:04:49.246 EAL: Trying to obtain current memory policy.
00:04:49.247 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.247 EAL: Restoring previous memory policy: 4
00:04:49.247 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.247 EAL: request: mp_malloc_sync
00:04:49.247 EAL: No shared files mode enabled, IPC is disabled
00:04:49.247 EAL: Heap on socket 0 was expanded by 130MB
00:04:49.247 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.247 EAL: request: mp_malloc_sync
00:04:49.247 EAL: No shared files mode enabled, IPC is disabled
00:04:49.247 EAL: Heap on socket 0 was shrunk by 130MB
00:04:49.247 EAL: Trying to obtain current memory policy.
00:04:49.247 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.247 EAL: Restoring previous memory policy: 4
00:04:49.247 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.247 EAL: request: mp_malloc_sync
00:04:49.247 EAL: No shared files mode enabled, IPC is disabled
00:04:49.247 EAL: Heap on socket 0 was expanded by 258MB
00:04:49.247 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.247 EAL: request: mp_malloc_sync
00:04:49.247 EAL: No shared files mode enabled, IPC is disabled
00:04:49.247 EAL: Heap on socket 0 was shrunk by 258MB
00:04:49.247 EAL: Trying to obtain current memory policy.
00:04:49.247 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.505 EAL: Restoring previous memory policy: 4
00:04:49.505 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.505 EAL: request: mp_malloc_sync
00:04:49.505 EAL: No shared files mode enabled, IPC is disabled
00:04:49.505 EAL: Heap on socket 0 was expanded by 514MB
00:04:49.505 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.505 EAL: request: mp_malloc_sync
00:04:49.505 EAL: No shared files mode enabled, IPC is disabled
00:04:49.505 EAL: Heap on socket 0 was shrunk by 514MB
00:04:49.505 EAL: Trying to obtain current memory policy.
00:04:49.505 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.763 EAL: Restoring previous memory policy: 4
00:04:49.763 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.763 EAL: request: mp_malloc_sync
00:04:49.763 EAL: No shared files mode enabled, IPC is disabled
00:04:49.763 EAL: Heap on socket 0 was expanded by 1026MB
00:04:50.020 EAL: Calling mem event callback 'spdk:(nil)'
00:04:50.020 EAL: request: mp_malloc_sync
00:04:50.020 EAL: No shared files mode enabled, IPC is disabled
00:04:50.020 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:50.020 passed
00:04:50.020
00:04:50.020 Run Summary: Type Total Ran Passed Failed Inactive
00:04:50.020 suites 1 1 n/a 0 0
00:04:50.020 tests 2 2 2 0 0
00:04:50.020 asserts 497 497 497 0 n/a
00:04:50.020
00:04:50.020 Elapsed time = 0.962 seconds
00:04:50.020 EAL: Calling mem event callback 'spdk:(nil)'
00:04:50.020 EAL: request: mp_malloc_sync
00:04:50.020 EAL: No shared files mode enabled, IPC is disabled
00:04:50.020 EAL: Heap on socket 0 was shrunk by 2MB
00:04:50.020 EAL: No shared files mode enabled, IPC is disabled
00:04:50.020 EAL: No shared files mode enabled, IPC is disabled
00:04:50.020 EAL: No shared files mode enabled, IPC is disabled
00:04:50.020
00:04:50.020 real 0m1.074s
00:04:50.020 user 0m0.628s
00:04:50.020 sys 0m0.415s
00:04:50.020 09:39:18 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:50.020 09:39:18 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:50.020 ************************************
00:04:50.020 END TEST env_vtophys
00:04:50.020 ************************************
00:04:50.278 09:39:18 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:50.278 09:39:18 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:50.278 09:39:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:50.278 09:39:18 env -- common/autotest_common.sh@10 -- # set +x
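The heap sizes reported by vtophys_spdk_malloc_test above (4 MB, 6 MB, 10 MB, 18 MB, ... 1026 MB) roughly double at each step; they all match the pattern 2^n + 2 MB for n = 1..10. A minimal sketch verifying that observed sequence (this is an observation about this particular log, not a documented SPDK/DPDK allocation policy):

```python
# Heap "expanded by"/"shrunk by" sizes (in MB) taken from the
# vtophys_spdk_malloc_test log lines above.
observed_mb = [4, 6, 10, 18, 34, 66, 130, 258, 514, 1026]

# Observation from this log only: each size equals 2**n + 2 MB.
derived_mb = [2 ** n + 2 for n in range(1, 11)]

assert derived_mb == observed_mb
```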
00:04:50.278 ************************************
00:04:50.278 START TEST env_pci
00:04:50.278 ************************************
00:04:50.278 09:39:18 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:50.278
00:04:50.278
00:04:50.278 CUnit - A unit testing framework for C - Version 2.1-3
00:04:50.278 http://cunit.sourceforge.net/
00:04:50.278
00:04:50.278
00:04:50.278 Suite: pci
00:04:50.278 Test: pci_hook ...[2024-12-07 09:39:18.801574] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1037588 has claimed it
00:04:50.278 EAL: Cannot find device (10000:00:01.0)
00:04:50.278 EAL: Failed to attach device on primary process
00:04:50.278 passed
00:04:50.278
00:04:50.278 Run Summary: Type Total Ran Passed Failed Inactive
00:04:50.278 suites 1 1 n/a 0 0
00:04:50.278 tests 1 1 1 0 0
00:04:50.278 asserts 25 25 25 0 n/a
00:04:50.278
00:04:50.278 Elapsed time = 0.018 seconds
00:04:50.278
00:04:50.278 real 0m0.027s
00:04:50.278 user 0m0.004s
00:04:50.278 sys 0m0.022s
00:04:50.278 09:39:18 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:50.278 09:39:18 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:50.278 ************************************
00:04:50.278 END TEST env_pci
00:04:50.278 ************************************
00:04:50.278 09:39:18 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:50.278 09:39:18 env -- env/env.sh@15 -- # uname
00:04:50.279 09:39:18 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:50.279 09:39:18 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:50.279 09:39:18 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:50.279 09:39:18 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:04:50.279 09:39:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:50.279 09:39:18 env -- common/autotest_common.sh@10 -- # set +x
00:04:50.279 ************************************
00:04:50.279 START TEST env_dpdk_post_init
00:04:50.279 ************************************
00:04:50.279 09:39:18 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:50.279 EAL: Detected CPU lcores: 96
00:04:50.279 EAL: Detected NUMA nodes: 2
00:04:50.279 EAL: Detected shared linkage of DPDK
00:04:50.279 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:50.279 EAL: Selected IOVA mode 'VA'
00:04:50.279 EAL: VFIO support initialized
00:04:50.279 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:50.279 EAL: Using IOMMU type 1 (Type 1)
00:04:50.279 EAL: Ignore mapping IO port bar(1)
00:04:50.279 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:50.279 EAL: Ignore mapping IO port bar(1)
00:04:50.279 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:50.537 EAL: Ignore mapping IO port bar(1)
00:04:50.538 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:50.538 EAL: Ignore mapping IO port bar(1)
00:04:50.538 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:50.538 EAL: Ignore mapping IO port bar(1)
00:04:50.538 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:50.538 EAL: Ignore mapping IO port bar(1)
00:04:50.538 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:50.538 EAL: Ignore mapping IO port bar(1)
00:04:50.538 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:50.538 EAL: Ignore mapping IO port bar(1)
00:04:50.538 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:51.106 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:04:51.106 EAL: Ignore mapping IO port bar(1)
00:04:51.106 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:51.366 EAL: Ignore mapping IO port bar(1)
00:04:51.366 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:51.366 EAL: Ignore mapping IO port bar(1)
00:04:51.366 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:51.366 EAL: Ignore mapping IO port bar(1)
00:04:51.366 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:51.366 EAL: Ignore mapping IO port bar(1)
00:04:51.366 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:51.366 EAL: Ignore mapping IO port bar(1)
00:04:51.366 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:51.366 EAL: Ignore mapping IO port bar(1)
00:04:51.366 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:51.366 EAL: Ignore mapping IO port bar(1)
00:04:51.366 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:54.656 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:04:54.656 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:04:54.656 Starting DPDK initialization...
00:04:54.656 Starting SPDK post initialization...
00:04:54.656 SPDK NVMe probe
00:04:54.656 Attaching to 0000:5e:00.0
00:04:54.656 Attached to 0000:5e:00.0
00:04:54.656 Cleaning up...
00:04:54.656
00:04:54.656 real 0m4.288s
00:04:54.656 user 0m3.216s
00:04:54.656 sys 0m0.146s
00:04:54.656 09:39:23 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:54.656 09:39:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:54.656 ************************************
00:04:54.656 END TEST env_dpdk_post_init
00:04:54.656 ************************************
00:04:54.656 09:39:23 env -- env/env.sh@26 -- # uname
00:04:54.656 09:39:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:54.656 09:39:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:54.656 09:39:23 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:54.656 09:39:23 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:54.656 09:39:23 env -- common/autotest_common.sh@10 -- # set +x
00:04:54.656 ************************************
00:04:54.656 START TEST env_mem_callbacks
00:04:54.656 ************************************
00:04:54.656 09:39:23 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:54.656 EAL: Detected CPU lcores: 96
00:04:54.656 EAL: Detected NUMA nodes: 2
00:04:54.656 EAL: Detected shared linkage of DPDK
00:04:54.656 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:54.656 EAL: Selected IOVA mode 'VA'
00:04:54.656 EAL: VFIO support initialized
00:04:54.656 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:54.656
00:04:54.656
00:04:54.656 CUnit - A unit testing framework for C - Version 2.1-3
00:04:54.656 http://cunit.sourceforge.net/
00:04:54.656
00:04:54.656
00:04:54.656 Suite: memory
00:04:54.656 Test: test ...
00:04:54.657 register 0x200000200000 2097152
00:04:54.657 malloc 3145728
00:04:54.657 register 0x200000400000 4194304
00:04:54.657 buf 0x200000500000 len 3145728 PASSED
00:04:54.657 malloc 64
00:04:54.657 buf 0x2000004fff40 len 64 PASSED
00:04:54.657 malloc 4194304
00:04:54.657 register 0x200000800000 6291456
00:04:54.657 buf 0x200000a00000 len 4194304 PASSED
00:04:54.657 free 0x200000500000 3145728
00:04:54.657 free 0x2000004fff40 64
00:04:54.657 unregister 0x200000400000 4194304 PASSED
00:04:54.657 free 0x200000a00000 4194304
00:04:54.657 unregister 0x200000800000 6291456 PASSED
00:04:54.657 malloc 8388608
00:04:54.657 register 0x200000400000 10485760
00:04:54.657 buf 0x200000600000 len 8388608 PASSED
00:04:54.657 free 0x200000600000 8388608
00:04:54.657 unregister 0x200000400000 10485760 PASSED
00:04:54.657 passed
00:04:54.657
00:04:54.657 Run Summary: Type Total Ran Passed Failed Inactive
00:04:54.657 suites 1 1 n/a 0 0
00:04:54.657 tests 1 1 1 0 0
00:04:54.657 asserts 15 15 15 0 n/a
00:04:54.657
00:04:54.657 Elapsed time = 0.005 seconds
00:04:54.657
00:04:54.657 real 0m0.034s
00:04:54.657 user 0m0.009s
00:04:54.657 sys 0m0.025s
00:04:54.657 09:39:23 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:54.657 09:39:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:54.657 ************************************
00:04:54.657 END TEST env_mem_callbacks
00:04:54.657 ************************************
00:04:54.657
00:04:54.657 real 0m6.069s
00:04:54.657 user 0m4.216s
00:04:54.657 sys 0m0.933s
00:04:54.657 09:39:23 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:54.657 09:39:23 env -- common/autotest_common.sh@10 -- # set +x
00:04:54.657 ************************************
00:04:54.657 END TEST env
00:04:54.657 ************************************
00:04:54.657 09:39:23 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:54.657 09:39:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:54.657 09:39:23 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:54.657 09:39:23 -- common/autotest_common.sh@10 -- # set +x
00:04:54.917 ************************************
00:04:54.917 START TEST rpc
00:04:54.917 ************************************
00:04:54.917 09:39:23 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:54.917 * Looking for test storage...
00:04:54.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:54.917 09:39:23 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:04:54.917 09:39:23 rpc -- common/autotest_common.sh@1681 -- # lcov --version
00:04:54.917 09:39:23 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:04:54.917 09:39:23 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:04:54.917 09:39:23 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:54.917 09:39:23 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:54.917 09:39:23 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:54.917 09:39:23 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:54.917 09:39:23 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:54.917 09:39:23 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:54.917 09:39:23 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:54.917 09:39:23 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:54.917 09:39:23 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:54.917 09:39:23 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:54.917 09:39:23 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:54.917 09:39:23 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:54.917 09:39:23 rpc -- scripts/common.sh@345 -- # : 1
00:04:54.917 09:39:23 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:54.917 09:39:23 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:54.917 09:39:23 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:54.917 09:39:23 rpc -- scripts/common.sh@353 -- # local d=1
00:04:54.917 09:39:23 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:54.917 09:39:23 rpc -- scripts/common.sh@355 -- # echo 1
00:04:54.917 09:39:23 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:54.917 09:39:23 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:54.917 09:39:23 rpc -- scripts/common.sh@353 -- # local d=2
00:04:54.917 09:39:23 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:54.917 09:39:23 rpc -- scripts/common.sh@355 -- # echo 2
00:04:54.917 09:39:23 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:54.917 09:39:23 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:54.917 09:39:23 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:54.917 09:39:23 rpc -- scripts/common.sh@368 -- # return 0
00:04:54.917 09:39:23 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:54.917 09:39:23 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:04:54.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:54.917 --rc genhtml_branch_coverage=1
00:04:54.917 --rc genhtml_function_coverage=1
00:04:54.917 --rc genhtml_legend=1
00:04:54.917 --rc geninfo_all_blocks=1
00:04:54.917 --rc geninfo_unexecuted_blocks=1
00:04:54.917
00:04:54.917 '
00:04:54.917 09:39:23 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:04:54.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:54.917 --rc genhtml_branch_coverage=1
00:04:54.917 --rc genhtml_function_coverage=1
00:04:54.917 --rc genhtml_legend=1
00:04:54.917 --rc geninfo_all_blocks=1
00:04:54.917 --rc geninfo_unexecuted_blocks=1
00:04:54.917
00:04:54.917 '
00:04:54.917 09:39:23 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:04:54.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:54.917 --rc genhtml_branch_coverage=1
00:04:54.917 --rc genhtml_function_coverage=1
00:04:54.917 --rc genhtml_legend=1
00:04:54.917 --rc geninfo_all_blocks=1
00:04:54.917 --rc geninfo_unexecuted_blocks=1
00:04:54.917
00:04:54.917 '
00:04:54.917 09:39:23 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:04:54.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:54.917 --rc genhtml_branch_coverage=1
00:04:54.917 --rc genhtml_function_coverage=1
00:04:54.917 --rc genhtml_legend=1
00:04:54.917 --rc geninfo_all_blocks=1
00:04:54.917 --rc geninfo_unexecuted_blocks=1
00:04:54.917
00:04:54.917 '
00:04:54.917 09:39:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1038462
00:04:54.917 09:39:23 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:54.917 09:39:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:54.917 09:39:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1038462
00:04:54.917 09:39:23 rpc -- common/autotest_common.sh@831 -- # '[' -z 1038462 ']'
00:04:54.917 09:39:23 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:54.917 09:39:23 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:54.917 09:39:23 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:54.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:54.917 09:39:23 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:54.917 09:39:23 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:54.917 [2024-12-07 09:39:23.612611] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:04:54.917 [2024-12-07 09:39:23.612659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1038462 ]
00:04:55.219 [2024-12-07 09:39:23.668504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:55.219 [2024-12-07 09:39:23.707550] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:55.219 [2024-12-07 09:39:23.707593] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1038462' to capture a snapshot of events at runtime.
00:04:55.219 [2024-12-07 09:39:23.707601] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:55.219 [2024-12-07 09:39:23.707607] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:55.219 [2024-12-07 09:39:23.707613] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1038462 for offline analysis/debug.
00:04:55.219 [2024-12-07 09:39:23.707634] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:55.219 09:39:23 rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:55.219 09:39:23 rpc -- common/autotest_common.sh@864 -- # return 0
00:04:55.219 09:39:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:55.219 09:39:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:55.219 09:39:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:55.219 09:39:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:55.219 09:39:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:55.219 09:39:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:55.219 09:39:23 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:55.502 ************************************
00:04:55.502 START TEST rpc_integrity
00:04:55.502 ************************************
00:04:55.502 09:39:23 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:04:55.502 09:39:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:55.502 09:39:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:55.502 09:39:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.502 09:39:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:55.502 09:39:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:55.502 09:39:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:55.502 09:39:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:55.502 09:39:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:55.502 09:39:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:55.502 09:39:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.502 09:39:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:55.502 09:39:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:55.502 09:39:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:55.502 09:39:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:55.502 09:39:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.502 09:39:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:55.502 09:39:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:55.502 {
00:04:55.502 "name": "Malloc0",
00:04:55.502 "aliases": [
00:04:55.502 "f156b784-e946-4840-a7b0-78f550b81b97"
00:04:55.502 ],
00:04:55.502 "product_name": "Malloc disk",
00:04:55.502 "block_size": 512,
00:04:55.502 "num_blocks": 16384,
00:04:55.502 "uuid": "f156b784-e946-4840-a7b0-78f550b81b97",
00:04:55.502 "assigned_rate_limits": {
00:04:55.502 "rw_ios_per_sec": 0,
00:04:55.502 "rw_mbytes_per_sec": 0,
00:04:55.502 "r_mbytes_per_sec": 0,
00:04:55.502 "w_mbytes_per_sec": 0
00:04:55.502 },
00:04:55.502 "claimed": false,
00:04:55.502 "zoned": false,
00:04:55.502 "supported_io_types": {
00:04:55.502 "read": true,
00:04:55.502 "write": true,
00:04:55.502 "unmap": true,
00:04:55.502 "flush": true,
00:04:55.502 "reset": true,
00:04:55.502 "nvme_admin": false,
00:04:55.502 "nvme_io": false,
00:04:55.502 "nvme_io_md": false,
00:04:55.502 "write_zeroes": true,
00:04:55.502 "zcopy": true,
00:04:55.502 "get_zone_info": false,
00:04:55.502 "zone_management": false,
00:04:55.502 "zone_append": false,
00:04:55.502 "compare": false,
00:04:55.502 "compare_and_write": false,
00:04:55.502 "abort": true,
00:04:55.502 "seek_hole": false,
00:04:55.502 "seek_data": false,
00:04:55.502 "copy": true,
00:04:55.502 "nvme_iov_md": false
00:04:55.502 },
00:04:55.502 "memory_domains": [
00:04:55.502 {
00:04:55.502 "dma_device_id": "system",
00:04:55.502 "dma_device_type": 1
00:04:55.502 },
00:04:55.502 {
00:04:55.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:55.502 "dma_device_type": 2
00:04:55.502 }
00:04:55.502 ],
00:04:55.502 "driver_specific": {}
00:04:55.502 }
00:04:55.502 ]'
00:04:55.502 09:39:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:55.502 09:39:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:55.502 09:39:24 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:55.502 09:39:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:55.502 09:39:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.502 [2024-12-07 09:39:24.058728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:55.502 [2024-12-07 09:39:24.058760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:55.502 [2024-12-07 09:39:24.058774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cd2e00
00:04:55.502 [2024-12-07 09:39:24.058780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:55.502 [2024-12-07 09:39:24.059860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:55.502 [2024-12-07 09:39:24.059882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:55.502 Passthru0
00:04:55.502 09:39:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:55.502 09:39:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:55.502 09:39:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:55.502 09:39:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.502 09:39:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:55.502 09:39:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:55.502 {
00:04:55.502 "name": "Malloc0",
00:04:55.502 "aliases": [
00:04:55.502 "f156b784-e946-4840-a7b0-78f550b81b97"
00:04:55.502 ],
00:04:55.502 "product_name": "Malloc disk",
00:04:55.502 "block_size": 512,
00:04:55.502 "num_blocks": 16384,
00:04:55.502 "uuid": "f156b784-e946-4840-a7b0-78f550b81b97",
00:04:55.502 "assigned_rate_limits": {
00:04:55.502 "rw_ios_per_sec": 0,
00:04:55.502 "rw_mbytes_per_sec": 0,
00:04:55.502 "r_mbytes_per_sec": 0,
00:04:55.502 "w_mbytes_per_sec": 0
00:04:55.502 },
00:04:55.502 "claimed": true,
00:04:55.502 "claim_type": "exclusive_write",
00:04:55.502 "zoned": false,
00:04:55.502 "supported_io_types": {
00:04:55.502 "read": true,
00:04:55.502 "write": true,
00:04:55.502 "unmap": true,
00:04:55.502 "flush": true,
00:04:55.502 "reset": true,
00:04:55.502 "nvme_admin": false,
00:04:55.502 "nvme_io": false,
00:04:55.502 "nvme_io_md": false,
00:04:55.502 "write_zeroes": true,
00:04:55.502 "zcopy": true,
00:04:55.502 "get_zone_info": false,
00:04:55.502 "zone_management": false,
00:04:55.502 "zone_append": false,
00:04:55.502 "compare": false,
00:04:55.502 "compare_and_write": false,
00:04:55.502 "abort": true,
00:04:55.502 "seek_hole": false,
00:04:55.502 "seek_data": false,
00:04:55.502 "copy": true,
00:04:55.502 "nvme_iov_md": false
00:04:55.502 },
00:04:55.502 "memory_domains": [
00:04:55.502 {
00:04:55.502 "dma_device_id": "system",
00:04:55.502 "dma_device_type": 1
00:04:55.502 },
00:04:55.502 {
00:04:55.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:55.502 "dma_device_type": 2
00:04:55.502 }
00:04:55.502 ],
00:04:55.502 "driver_specific": {}
00:04:55.502 },
00:04:55.502 {
00:04:55.502 "name": "Passthru0", 00:04:55.502 "aliases": [ 00:04:55.502 "2935b268-6424-534e-b448-c2e63b7ff716" 00:04:55.502 ], 00:04:55.502 "product_name": "passthru", 00:04:55.502 "block_size": 512, 00:04:55.502 "num_blocks": 16384, 00:04:55.502 "uuid": "2935b268-6424-534e-b448-c2e63b7ff716", 00:04:55.502 "assigned_rate_limits": { 00:04:55.502 "rw_ios_per_sec": 0, 00:04:55.502 "rw_mbytes_per_sec": 0, 00:04:55.502 "r_mbytes_per_sec": 0, 00:04:55.502 "w_mbytes_per_sec": 0 00:04:55.502 }, 00:04:55.502 "claimed": false, 00:04:55.502 "zoned": false, 00:04:55.502 "supported_io_types": { 00:04:55.502 "read": true, 00:04:55.502 "write": true, 00:04:55.502 "unmap": true, 00:04:55.502 "flush": true, 00:04:55.502 "reset": true, 00:04:55.502 "nvme_admin": false, 00:04:55.502 "nvme_io": false, 00:04:55.502 "nvme_io_md": false, 00:04:55.502 "write_zeroes": true, 00:04:55.502 "zcopy": true, 00:04:55.502 "get_zone_info": false, 00:04:55.502 "zone_management": false, 00:04:55.502 "zone_append": false, 00:04:55.502 "compare": false, 00:04:55.502 "compare_and_write": false, 00:04:55.502 "abort": true, 00:04:55.502 "seek_hole": false, 00:04:55.502 "seek_data": false, 00:04:55.502 "copy": true, 00:04:55.502 "nvme_iov_md": false 00:04:55.502 }, 00:04:55.502 "memory_domains": [ 00:04:55.502 { 00:04:55.502 "dma_device_id": "system", 00:04:55.502 "dma_device_type": 1 00:04:55.502 }, 00:04:55.502 { 00:04:55.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.502 "dma_device_type": 2 00:04:55.502 } 00:04:55.502 ], 00:04:55.502 "driver_specific": { 00:04:55.502 "passthru": { 00:04:55.502 "name": "Passthru0", 00:04:55.502 "base_bdev_name": "Malloc0" 00:04:55.502 } 00:04:55.502 } 00:04:55.502 } 00:04:55.502 ]' 00:04:55.502 09:39:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:55.502 09:39:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:55.502 09:39:24 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:55.502 09:39:24 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.502 09:39:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.502 09:39:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.502 09:39:24 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:55.502 09:39:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.502 09:39:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.502 09:39:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.502 09:39:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:55.502 09:39:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.502 09:39:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.502 09:39:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.502 09:39:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:55.502 09:39:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:55.503 09:39:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:55.503 00:04:55.503 real 0m0.274s 00:04:55.503 user 0m0.163s 00:04:55.503 sys 0m0.045s 00:04:55.503 09:39:24 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.503 09:39:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.503 ************************************ 00:04:55.503 END TEST rpc_integrity 00:04:55.503 ************************************ 00:04:55.780 09:39:24 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:55.780 09:39:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.780 09:39:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.780 09:39:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.780 ************************************ 00:04:55.780 START TEST rpc_plugins 
00:04:55.780 ************************************ 00:04:55.780 09:39:24 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:55.780 09:39:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:55.780 09:39:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.780 09:39:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:55.780 09:39:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.780 09:39:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:55.780 09:39:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:55.780 09:39:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.780 09:39:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:55.780 09:39:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.780 09:39:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:55.780 { 00:04:55.780 "name": "Malloc1", 00:04:55.780 "aliases": [ 00:04:55.780 "22c2b6b0-feed-4094-ada6-122e5fad68a3" 00:04:55.780 ], 00:04:55.780 "product_name": "Malloc disk", 00:04:55.780 "block_size": 4096, 00:04:55.780 "num_blocks": 256, 00:04:55.780 "uuid": "22c2b6b0-feed-4094-ada6-122e5fad68a3", 00:04:55.780 "assigned_rate_limits": { 00:04:55.780 "rw_ios_per_sec": 0, 00:04:55.780 "rw_mbytes_per_sec": 0, 00:04:55.780 "r_mbytes_per_sec": 0, 00:04:55.780 "w_mbytes_per_sec": 0 00:04:55.780 }, 00:04:55.780 "claimed": false, 00:04:55.780 "zoned": false, 00:04:55.780 "supported_io_types": { 00:04:55.780 "read": true, 00:04:55.780 "write": true, 00:04:55.780 "unmap": true, 00:04:55.780 "flush": true, 00:04:55.780 "reset": true, 00:04:55.780 "nvme_admin": false, 00:04:55.780 "nvme_io": false, 00:04:55.780 "nvme_io_md": false, 00:04:55.780 "write_zeroes": true, 00:04:55.780 "zcopy": true, 00:04:55.780 "get_zone_info": false, 00:04:55.780 "zone_management": false, 00:04:55.780 
"zone_append": false, 00:04:55.780 "compare": false, 00:04:55.780 "compare_and_write": false, 00:04:55.780 "abort": true, 00:04:55.780 "seek_hole": false, 00:04:55.780 "seek_data": false, 00:04:55.780 "copy": true, 00:04:55.780 "nvme_iov_md": false 00:04:55.780 }, 00:04:55.780 "memory_domains": [ 00:04:55.780 { 00:04:55.780 "dma_device_id": "system", 00:04:55.780 "dma_device_type": 1 00:04:55.780 }, 00:04:55.780 { 00:04:55.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.780 "dma_device_type": 2 00:04:55.780 } 00:04:55.780 ], 00:04:55.780 "driver_specific": {} 00:04:55.780 } 00:04:55.780 ]' 00:04:55.780 09:39:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:55.780 09:39:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:55.780 09:39:24 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:55.780 09:39:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.780 09:39:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:55.780 09:39:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.780 09:39:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:55.780 09:39:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.780 09:39:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:55.780 09:39:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.780 09:39:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:55.780 09:39:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:55.780 09:39:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:55.780 00:04:55.780 real 0m0.142s 00:04:55.780 user 0m0.089s 00:04:55.780 sys 0m0.019s 00:04:55.780 09:39:24 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.780 09:39:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:55.780 ************************************ 
00:04:55.780 END TEST rpc_plugins 00:04:55.780 ************************************ 00:04:55.780 09:39:24 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:55.780 09:39:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.780 09:39:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.780 09:39:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.780 ************************************ 00:04:55.780 START TEST rpc_trace_cmd_test 00:04:55.780 ************************************ 00:04:55.780 09:39:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:55.780 09:39:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:55.780 09:39:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:55.780 09:39:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.780 09:39:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:56.063 09:39:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.063 09:39:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:56.063 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1038462", 00:04:56.063 "tpoint_group_mask": "0x8", 00:04:56.063 "iscsi_conn": { 00:04:56.063 "mask": "0x2", 00:04:56.063 "tpoint_mask": "0x0" 00:04:56.063 }, 00:04:56.063 "scsi": { 00:04:56.063 "mask": "0x4", 00:04:56.063 "tpoint_mask": "0x0" 00:04:56.063 }, 00:04:56.063 "bdev": { 00:04:56.063 "mask": "0x8", 00:04:56.063 "tpoint_mask": "0xffffffffffffffff" 00:04:56.063 }, 00:04:56.063 "nvmf_rdma": { 00:04:56.063 "mask": "0x10", 00:04:56.063 "tpoint_mask": "0x0" 00:04:56.063 }, 00:04:56.063 "nvmf_tcp": { 00:04:56.063 "mask": "0x20", 00:04:56.063 "tpoint_mask": "0x0" 00:04:56.063 }, 00:04:56.063 "ftl": { 00:04:56.063 "mask": "0x40", 00:04:56.063 "tpoint_mask": "0x0" 00:04:56.063 }, 00:04:56.063 "blobfs": { 00:04:56.063 "mask": "0x80", 00:04:56.063 
"tpoint_mask": "0x0" 00:04:56.063 }, 00:04:56.063 "dsa": { 00:04:56.063 "mask": "0x200", 00:04:56.063 "tpoint_mask": "0x0" 00:04:56.063 }, 00:04:56.063 "thread": { 00:04:56.063 "mask": "0x400", 00:04:56.063 "tpoint_mask": "0x0" 00:04:56.063 }, 00:04:56.063 "nvme_pcie": { 00:04:56.063 "mask": "0x800", 00:04:56.063 "tpoint_mask": "0x0" 00:04:56.063 }, 00:04:56.063 "iaa": { 00:04:56.063 "mask": "0x1000", 00:04:56.063 "tpoint_mask": "0x0" 00:04:56.063 }, 00:04:56.063 "nvme_tcp": { 00:04:56.063 "mask": "0x2000", 00:04:56.063 "tpoint_mask": "0x0" 00:04:56.063 }, 00:04:56.063 "bdev_nvme": { 00:04:56.063 "mask": "0x4000", 00:04:56.063 "tpoint_mask": "0x0" 00:04:56.063 }, 00:04:56.063 "sock": { 00:04:56.063 "mask": "0x8000", 00:04:56.063 "tpoint_mask": "0x0" 00:04:56.063 }, 00:04:56.063 "blob": { 00:04:56.063 "mask": "0x10000", 00:04:56.063 "tpoint_mask": "0x0" 00:04:56.063 }, 00:04:56.063 "bdev_raid": { 00:04:56.063 "mask": "0x20000", 00:04:56.063 "tpoint_mask": "0x0" 00:04:56.063 } 00:04:56.063 }' 00:04:56.063 09:39:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:56.063 09:39:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:04:56.063 09:39:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:56.063 09:39:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:56.063 09:39:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:56.063 09:39:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:56.063 09:39:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:56.064 09:39:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:56.064 09:39:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:56.064 09:39:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:56.064 00:04:56.064 real 0m0.228s 00:04:56.064 user 0m0.187s 00:04:56.064 sys 0m0.031s 
00:04:56.064 09:39:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.064 09:39:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:56.064 ************************************ 00:04:56.064 END TEST rpc_trace_cmd_test 00:04:56.064 ************************************ 00:04:56.064 09:39:24 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:56.064 09:39:24 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:56.064 09:39:24 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:56.064 09:39:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.064 09:39:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.064 09:39:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.064 ************************************ 00:04:56.064 START TEST rpc_daemon_integrity 00:04:56.064 ************************************ 00:04:56.064 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:56.064 09:39:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:56.064 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.064 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.372 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.372 09:39:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:56.372 09:39:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:56.372 09:39:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:56.372 09:39:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:56.372 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.372 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.372 09:39:24 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.372 09:39:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:56.372 09:39:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:56.373 { 00:04:56.373 "name": "Malloc2", 00:04:56.373 "aliases": [ 00:04:56.373 "41e22ea5-62eb-4586-a993-ed99af2c48b1" 00:04:56.373 ], 00:04:56.373 "product_name": "Malloc disk", 00:04:56.373 "block_size": 512, 00:04:56.373 "num_blocks": 16384, 00:04:56.373 "uuid": "41e22ea5-62eb-4586-a993-ed99af2c48b1", 00:04:56.373 "assigned_rate_limits": { 00:04:56.373 "rw_ios_per_sec": 0, 00:04:56.373 "rw_mbytes_per_sec": 0, 00:04:56.373 "r_mbytes_per_sec": 0, 00:04:56.373 "w_mbytes_per_sec": 0 00:04:56.373 }, 00:04:56.373 "claimed": false, 00:04:56.373 "zoned": false, 00:04:56.373 "supported_io_types": { 00:04:56.373 "read": true, 00:04:56.373 "write": true, 00:04:56.373 "unmap": true, 00:04:56.373 "flush": true, 00:04:56.373 "reset": true, 00:04:56.373 "nvme_admin": false, 00:04:56.373 "nvme_io": false, 00:04:56.373 "nvme_io_md": false, 00:04:56.373 "write_zeroes": true, 00:04:56.373 "zcopy": true, 00:04:56.373 "get_zone_info": false, 00:04:56.373 "zone_management": false, 00:04:56.373 "zone_append": false, 00:04:56.373 "compare": false, 00:04:56.373 "compare_and_write": false, 00:04:56.373 "abort": true, 00:04:56.373 "seek_hole": false, 00:04:56.373 "seek_data": false, 00:04:56.373 "copy": true, 00:04:56.373 "nvme_iov_md": false 00:04:56.373 }, 00:04:56.373 "memory_domains": [ 00:04:56.373 { 00:04:56.373 "dma_device_id": "system", 00:04:56.373 "dma_device_type": 1 00:04:56.373 }, 
00:04:56.373 { 00:04:56.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.373 "dma_device_type": 2 00:04:56.373 } 00:04:56.373 ], 00:04:56.373 "driver_specific": {} 00:04:56.373 } 00:04:56.373 ]' 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.373 [2024-12-07 09:39:24.905034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:56.373 [2024-12-07 09:39:24.905063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:56.373 [2024-12-07 09:39:24.905075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b904b0 00:04:56.373 [2024-12-07 09:39:24.905081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:56.373 [2024-12-07 09:39:24.906041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:56.373 [2024-12-07 09:39:24.906062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:56.373 Passthru0 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:56.373 { 
00:04:56.373 "name": "Malloc2", 00:04:56.373 "aliases": [ 00:04:56.373 "41e22ea5-62eb-4586-a993-ed99af2c48b1" 00:04:56.373 ], 00:04:56.373 "product_name": "Malloc disk", 00:04:56.373 "block_size": 512, 00:04:56.373 "num_blocks": 16384, 00:04:56.373 "uuid": "41e22ea5-62eb-4586-a993-ed99af2c48b1", 00:04:56.373 "assigned_rate_limits": { 00:04:56.373 "rw_ios_per_sec": 0, 00:04:56.373 "rw_mbytes_per_sec": 0, 00:04:56.373 "r_mbytes_per_sec": 0, 00:04:56.373 "w_mbytes_per_sec": 0 00:04:56.373 }, 00:04:56.373 "claimed": true, 00:04:56.373 "claim_type": "exclusive_write", 00:04:56.373 "zoned": false, 00:04:56.373 "supported_io_types": { 00:04:56.373 "read": true, 00:04:56.373 "write": true, 00:04:56.373 "unmap": true, 00:04:56.373 "flush": true, 00:04:56.373 "reset": true, 00:04:56.373 "nvme_admin": false, 00:04:56.373 "nvme_io": false, 00:04:56.373 "nvme_io_md": false, 00:04:56.373 "write_zeroes": true, 00:04:56.373 "zcopy": true, 00:04:56.373 "get_zone_info": false, 00:04:56.373 "zone_management": false, 00:04:56.373 "zone_append": false, 00:04:56.373 "compare": false, 00:04:56.373 "compare_and_write": false, 00:04:56.373 "abort": true, 00:04:56.373 "seek_hole": false, 00:04:56.373 "seek_data": false, 00:04:56.373 "copy": true, 00:04:56.373 "nvme_iov_md": false 00:04:56.373 }, 00:04:56.373 "memory_domains": [ 00:04:56.373 { 00:04:56.373 "dma_device_id": "system", 00:04:56.373 "dma_device_type": 1 00:04:56.373 }, 00:04:56.373 { 00:04:56.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.373 "dma_device_type": 2 00:04:56.373 } 00:04:56.373 ], 00:04:56.373 "driver_specific": {} 00:04:56.373 }, 00:04:56.373 { 00:04:56.373 "name": "Passthru0", 00:04:56.373 "aliases": [ 00:04:56.373 "8125eecd-0d3a-503c-875a-d6b235057dbd" 00:04:56.373 ], 00:04:56.373 "product_name": "passthru", 00:04:56.373 "block_size": 512, 00:04:56.373 "num_blocks": 16384, 00:04:56.373 "uuid": "8125eecd-0d3a-503c-875a-d6b235057dbd", 00:04:56.373 "assigned_rate_limits": { 00:04:56.373 "rw_ios_per_sec": 0, 
00:04:56.373 "rw_mbytes_per_sec": 0, 00:04:56.373 "r_mbytes_per_sec": 0, 00:04:56.373 "w_mbytes_per_sec": 0 00:04:56.373 }, 00:04:56.373 "claimed": false, 00:04:56.373 "zoned": false, 00:04:56.373 "supported_io_types": { 00:04:56.373 "read": true, 00:04:56.373 "write": true, 00:04:56.373 "unmap": true, 00:04:56.373 "flush": true, 00:04:56.373 "reset": true, 00:04:56.373 "nvme_admin": false, 00:04:56.373 "nvme_io": false, 00:04:56.373 "nvme_io_md": false, 00:04:56.373 "write_zeroes": true, 00:04:56.373 "zcopy": true, 00:04:56.373 "get_zone_info": false, 00:04:56.373 "zone_management": false, 00:04:56.373 "zone_append": false, 00:04:56.373 "compare": false, 00:04:56.373 "compare_and_write": false, 00:04:56.373 "abort": true, 00:04:56.373 "seek_hole": false, 00:04:56.373 "seek_data": false, 00:04:56.373 "copy": true, 00:04:56.373 "nvme_iov_md": false 00:04:56.373 }, 00:04:56.373 "memory_domains": [ 00:04:56.373 { 00:04:56.373 "dma_device_id": "system", 00:04:56.373 "dma_device_type": 1 00:04:56.373 }, 00:04:56.373 { 00:04:56.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.373 "dma_device_type": 2 00:04:56.373 } 00:04:56.373 ], 00:04:56.373 "driver_specific": { 00:04:56.373 "passthru": { 00:04:56.373 "name": "Passthru0", 00:04:56.373 "base_bdev_name": "Malloc2" 00:04:56.373 } 00:04:56.373 } 00:04:56.373 } 00:04:56.373 ]' 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd 
bdev_malloc_delete Malloc2 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.373 09:39:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.373 09:39:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.373 09:39:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:56.373 09:39:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:56.373 09:39:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:56.373 00:04:56.373 real 0m0.279s 00:04:56.373 user 0m0.180s 00:04:56.373 sys 0m0.036s 00:04:56.373 09:39:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.373 09:39:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.373 ************************************ 00:04:56.373 END TEST rpc_daemon_integrity 00:04:56.373 ************************************ 00:04:56.657 09:39:25 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:56.657 09:39:25 rpc -- rpc/rpc.sh@84 -- # killprocess 1038462 00:04:56.657 09:39:25 rpc -- common/autotest_common.sh@950 -- # '[' -z 1038462 ']' 00:04:56.657 09:39:25 rpc -- common/autotest_common.sh@954 -- # kill -0 1038462 00:04:56.657 09:39:25 rpc -- common/autotest_common.sh@955 -- # uname 00:04:56.657 09:39:25 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:56.657 09:39:25 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1038462 00:04:56.657 09:39:25 rpc -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:04:56.657 09:39:25 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:56.657 09:39:25 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1038462' 00:04:56.657 killing process with pid 1038462 00:04:56.657 09:39:25 rpc -- common/autotest_common.sh@969 -- # kill 1038462 00:04:56.657 09:39:25 rpc -- common/autotest_common.sh@974 -- # wait 1038462 00:04:56.915 00:04:56.915 real 0m2.070s 00:04:56.915 user 0m2.644s 00:04:56.915 sys 0m0.701s 00:04:56.915 09:39:25 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.915 09:39:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.915 ************************************ 00:04:56.915 END TEST rpc 00:04:56.915 ************************************ 00:04:56.915 09:39:25 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:56.915 09:39:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.915 09:39:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.915 09:39:25 -- common/autotest_common.sh@10 -- # set +x 00:04:56.915 ************************************ 00:04:56.915 START TEST skip_rpc 00:04:56.915 ************************************ 00:04:56.915 09:39:25 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:56.915 * Looking for test storage... 
00:04:56.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:56.915 09:39:25 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:56.915 09:39:25 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:56.915 09:39:25 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:57.174 09:39:25 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:57.174 09:39:25 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.174 09:39:25 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.174 09:39:25 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.174 09:39:25 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.174 09:39:25 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.174 09:39:25 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.174 09:39:25 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.174 09:39:25 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.174 09:39:25 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.174 09:39:25 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.174 09:39:25 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.174 09:39:25 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:57.174 09:39:25 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:57.174 09:39:25 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.174 09:39:25 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.174 09:39:25 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:57.174 09:39:25 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:57.174 09:39:25 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.175 09:39:25 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:57.175 09:39:25 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.175 09:39:25 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:57.175 09:39:25 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:57.175 09:39:25 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.175 09:39:25 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:57.175 09:39:25 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.175 09:39:25 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.175 09:39:25 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.175 09:39:25 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:57.175 09:39:25 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.175 09:39:25 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:57.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.175 --rc genhtml_branch_coverage=1 00:04:57.175 --rc genhtml_function_coverage=1 00:04:57.175 --rc genhtml_legend=1 00:04:57.175 --rc geninfo_all_blocks=1 00:04:57.175 --rc geninfo_unexecuted_blocks=1 00:04:57.175 00:04:57.175 ' 00:04:57.175 09:39:25 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:57.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.175 --rc genhtml_branch_coverage=1 00:04:57.175 --rc genhtml_function_coverage=1 00:04:57.175 --rc genhtml_legend=1 00:04:57.175 --rc geninfo_all_blocks=1 00:04:57.175 --rc geninfo_unexecuted_blocks=1 00:04:57.175 00:04:57.175 ' 00:04:57.175 09:39:25 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:04:57.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.175 --rc genhtml_branch_coverage=1 00:04:57.175 --rc genhtml_function_coverage=1 00:04:57.175 --rc genhtml_legend=1 00:04:57.175 --rc geninfo_all_blocks=1 00:04:57.175 --rc geninfo_unexecuted_blocks=1 00:04:57.175 00:04:57.175 ' 00:04:57.175 09:39:25 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:57.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.175 --rc genhtml_branch_coverage=1 00:04:57.175 --rc genhtml_function_coverage=1 00:04:57.175 --rc genhtml_legend=1 00:04:57.175 --rc geninfo_all_blocks=1 00:04:57.175 --rc geninfo_unexecuted_blocks=1 00:04:57.175 00:04:57.175 ' 00:04:57.175 09:39:25 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:57.175 09:39:25 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:57.175 09:39:25 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:57.175 09:39:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.175 09:39:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.175 09:39:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.175 ************************************ 00:04:57.175 START TEST skip_rpc 00:04:57.175 ************************************ 00:04:57.175 09:39:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:57.175 09:39:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1039106 00:04:57.175 09:39:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.175 09:39:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:57.175 09:39:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:57.175 [2024-12-07 09:39:25.774081] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:04:57.175 [2024-12-07 09:39:25.774121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039106 ] 00:04:57.175 [2024-12-07 09:39:25.828855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.175 [2024-12-07 09:39:25.868601] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:02.446 09:39:30 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1039106 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1039106 ']' 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1039106 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1039106 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1039106' 00:05:02.446 killing process with pid 1039106 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1039106 00:05:02.446 09:39:30 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1039106 00:05:02.446 00:05:02.446 real 0m5.382s 00:05:02.446 user 0m5.152s 00:05:02.446 sys 0m0.266s 00:05:02.446 09:39:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.446 09:39:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.446 ************************************ 00:05:02.446 END TEST skip_rpc 00:05:02.446 ************************************ 00:05:02.446 09:39:31 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:02.446 09:39:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.446 09:39:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.446 09:39:31 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.706 ************************************ 00:05:02.706 START TEST skip_rpc_with_json 00:05:02.706 ************************************ 00:05:02.706 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:02.706 09:39:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:02.706 09:39:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1040050 00:05:02.706 09:39:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.706 09:39:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.706 09:39:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1040050 00:05:02.706 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1040050 ']' 00:05:02.706 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.706 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.706 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.706 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.706 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:02.706 [2024-12-07 09:39:31.224438] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:02.706 [2024-12-07 09:39:31.224478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040050 ] 00:05:02.706 [2024-12-07 09:39:31.276829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.706 [2024-12-07 09:39:31.316362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.966 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.966 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:02.966 09:39:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:02.966 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.966 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:02.966 [2024-12-07 09:39:31.519878] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:02.966 request: 00:05:02.966 { 00:05:02.966 "trtype": "tcp", 00:05:02.966 "method": "nvmf_get_transports", 00:05:02.966 "req_id": 1 00:05:02.966 } 00:05:02.966 Got JSON-RPC error response 00:05:02.966 response: 00:05:02.966 { 00:05:02.966 "code": -19, 00:05:02.966 "message": "No such device" 00:05:02.966 } 00:05:02.966 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:02.966 09:39:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:02.966 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.966 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:02.966 [2024-12-07 09:39:31.531994] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:02.966 09:39:31 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.966 09:39:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:02.966 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.966 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.226 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.226 09:39:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:03.226 { 00:05:03.226 "subsystems": [ 00:05:03.226 { 00:05:03.226 "subsystem": "fsdev", 00:05:03.226 "config": [ 00:05:03.226 { 00:05:03.226 "method": "fsdev_set_opts", 00:05:03.226 "params": { 00:05:03.226 "fsdev_io_pool_size": 65535, 00:05:03.226 "fsdev_io_cache_size": 256 00:05:03.226 } 00:05:03.226 } 00:05:03.226 ] 00:05:03.226 }, 00:05:03.226 { 00:05:03.226 "subsystem": "vfio_user_target", 00:05:03.226 "config": null 00:05:03.226 }, 00:05:03.226 { 00:05:03.226 "subsystem": "keyring", 00:05:03.226 "config": [] 00:05:03.226 }, 00:05:03.226 { 00:05:03.226 "subsystem": "iobuf", 00:05:03.226 "config": [ 00:05:03.226 { 00:05:03.226 "method": "iobuf_set_options", 00:05:03.226 "params": { 00:05:03.226 "small_pool_count": 8192, 00:05:03.226 "large_pool_count": 1024, 00:05:03.226 "small_bufsize": 8192, 00:05:03.226 "large_bufsize": 135168 00:05:03.226 } 00:05:03.226 } 00:05:03.226 ] 00:05:03.226 }, 00:05:03.226 { 00:05:03.226 "subsystem": "sock", 00:05:03.226 "config": [ 00:05:03.226 { 00:05:03.226 "method": "sock_set_default_impl", 00:05:03.226 "params": { 00:05:03.226 "impl_name": "posix" 00:05:03.226 } 00:05:03.226 }, 00:05:03.226 { 00:05:03.226 "method": "sock_impl_set_options", 00:05:03.226 "params": { 00:05:03.226 "impl_name": "ssl", 00:05:03.226 "recv_buf_size": 4096, 00:05:03.226 "send_buf_size": 4096, 00:05:03.226 "enable_recv_pipe": true, 
00:05:03.226 "enable_quickack": false, 00:05:03.226 "enable_placement_id": 0, 00:05:03.226 "enable_zerocopy_send_server": true, 00:05:03.226 "enable_zerocopy_send_client": false, 00:05:03.226 "zerocopy_threshold": 0, 00:05:03.226 "tls_version": 0, 00:05:03.226 "enable_ktls": false 00:05:03.226 } 00:05:03.226 }, 00:05:03.226 { 00:05:03.226 "method": "sock_impl_set_options", 00:05:03.226 "params": { 00:05:03.226 "impl_name": "posix", 00:05:03.226 "recv_buf_size": 2097152, 00:05:03.226 "send_buf_size": 2097152, 00:05:03.226 "enable_recv_pipe": true, 00:05:03.226 "enable_quickack": false, 00:05:03.226 "enable_placement_id": 0, 00:05:03.226 "enable_zerocopy_send_server": true, 00:05:03.227 "enable_zerocopy_send_client": false, 00:05:03.227 "zerocopy_threshold": 0, 00:05:03.227 "tls_version": 0, 00:05:03.227 "enable_ktls": false 00:05:03.227 } 00:05:03.227 } 00:05:03.227 ] 00:05:03.227 }, 00:05:03.227 { 00:05:03.227 "subsystem": "vmd", 00:05:03.227 "config": [] 00:05:03.227 }, 00:05:03.227 { 00:05:03.227 "subsystem": "accel", 00:05:03.227 "config": [ 00:05:03.227 { 00:05:03.227 "method": "accel_set_options", 00:05:03.227 "params": { 00:05:03.227 "small_cache_size": 128, 00:05:03.227 "large_cache_size": 16, 00:05:03.227 "task_count": 2048, 00:05:03.227 "sequence_count": 2048, 00:05:03.227 "buf_count": 2048 00:05:03.227 } 00:05:03.227 } 00:05:03.227 ] 00:05:03.227 }, 00:05:03.227 { 00:05:03.227 "subsystem": "bdev", 00:05:03.227 "config": [ 00:05:03.227 { 00:05:03.227 "method": "bdev_set_options", 00:05:03.227 "params": { 00:05:03.227 "bdev_io_pool_size": 65535, 00:05:03.227 "bdev_io_cache_size": 256, 00:05:03.227 "bdev_auto_examine": true, 00:05:03.227 "iobuf_small_cache_size": 128, 00:05:03.227 "iobuf_large_cache_size": 16 00:05:03.227 } 00:05:03.227 }, 00:05:03.227 { 00:05:03.227 "method": "bdev_raid_set_options", 00:05:03.227 "params": { 00:05:03.227 "process_window_size_kb": 1024, 00:05:03.227 "process_max_bandwidth_mb_sec": 0 00:05:03.227 } 00:05:03.227 }, 
00:05:03.227 { 00:05:03.227 "method": "bdev_iscsi_set_options", 00:05:03.227 "params": { 00:05:03.227 "timeout_sec": 30 00:05:03.227 } 00:05:03.227 }, 00:05:03.227 { 00:05:03.227 "method": "bdev_nvme_set_options", 00:05:03.227 "params": { 00:05:03.227 "action_on_timeout": "none", 00:05:03.227 "timeout_us": 0, 00:05:03.227 "timeout_admin_us": 0, 00:05:03.227 "keep_alive_timeout_ms": 10000, 00:05:03.227 "arbitration_burst": 0, 00:05:03.227 "low_priority_weight": 0, 00:05:03.227 "medium_priority_weight": 0, 00:05:03.227 "high_priority_weight": 0, 00:05:03.227 "nvme_adminq_poll_period_us": 10000, 00:05:03.227 "nvme_ioq_poll_period_us": 0, 00:05:03.227 "io_queue_requests": 0, 00:05:03.227 "delay_cmd_submit": true, 00:05:03.227 "transport_retry_count": 4, 00:05:03.227 "bdev_retry_count": 3, 00:05:03.227 "transport_ack_timeout": 0, 00:05:03.227 "ctrlr_loss_timeout_sec": 0, 00:05:03.227 "reconnect_delay_sec": 0, 00:05:03.227 "fast_io_fail_timeout_sec": 0, 00:05:03.227 "disable_auto_failback": false, 00:05:03.227 "generate_uuids": false, 00:05:03.227 "transport_tos": 0, 00:05:03.227 "nvme_error_stat": false, 00:05:03.227 "rdma_srq_size": 0, 00:05:03.227 "io_path_stat": false, 00:05:03.227 "allow_accel_sequence": false, 00:05:03.227 "rdma_max_cq_size": 0, 00:05:03.227 "rdma_cm_event_timeout_ms": 0, 00:05:03.227 "dhchap_digests": [ 00:05:03.227 "sha256", 00:05:03.227 "sha384", 00:05:03.227 "sha512" 00:05:03.227 ], 00:05:03.227 "dhchap_dhgroups": [ 00:05:03.227 "null", 00:05:03.227 "ffdhe2048", 00:05:03.227 "ffdhe3072", 00:05:03.227 "ffdhe4096", 00:05:03.227 "ffdhe6144", 00:05:03.227 "ffdhe8192" 00:05:03.227 ] 00:05:03.227 } 00:05:03.227 }, 00:05:03.227 { 00:05:03.227 "method": "bdev_nvme_set_hotplug", 00:05:03.227 "params": { 00:05:03.227 "period_us": 100000, 00:05:03.227 "enable": false 00:05:03.227 } 00:05:03.227 }, 00:05:03.227 { 00:05:03.227 "method": "bdev_wait_for_examine" 00:05:03.227 } 00:05:03.227 ] 00:05:03.227 }, 00:05:03.227 { 00:05:03.227 "subsystem": "scsi", 
00:05:03.227 "config": null 00:05:03.227 }, 00:05:03.227 { 00:05:03.227 "subsystem": "scheduler", 00:05:03.227 "config": [ 00:05:03.227 { 00:05:03.227 "method": "framework_set_scheduler", 00:05:03.227 "params": { 00:05:03.227 "name": "static" 00:05:03.227 } 00:05:03.227 } 00:05:03.227 ] 00:05:03.227 }, 00:05:03.227 { 00:05:03.227 "subsystem": "vhost_scsi", 00:05:03.227 "config": [] 00:05:03.227 }, 00:05:03.227 { 00:05:03.227 "subsystem": "vhost_blk", 00:05:03.227 "config": [] 00:05:03.227 }, 00:05:03.227 { 00:05:03.227 "subsystem": "ublk", 00:05:03.227 "config": [] 00:05:03.227 }, 00:05:03.227 { 00:05:03.227 "subsystem": "nbd", 00:05:03.227 "config": [] 00:05:03.227 }, 00:05:03.227 { 00:05:03.227 "subsystem": "nvmf", 00:05:03.227 "config": [ 00:05:03.227 { 00:05:03.227 "method": "nvmf_set_config", 00:05:03.227 "params": { 00:05:03.227 "discovery_filter": "match_any", 00:05:03.227 "admin_cmd_passthru": { 00:05:03.227 "identify_ctrlr": false 00:05:03.227 }, 00:05:03.227 "dhchap_digests": [ 00:05:03.227 "sha256", 00:05:03.227 "sha384", 00:05:03.227 "sha512" 00:05:03.227 ], 00:05:03.227 "dhchap_dhgroups": [ 00:05:03.227 "null", 00:05:03.227 "ffdhe2048", 00:05:03.227 "ffdhe3072", 00:05:03.227 "ffdhe4096", 00:05:03.227 "ffdhe6144", 00:05:03.227 "ffdhe8192" 00:05:03.227 ] 00:05:03.227 } 00:05:03.227 }, 00:05:03.227 { 00:05:03.227 "method": "nvmf_set_max_subsystems", 00:05:03.227 "params": { 00:05:03.227 "max_subsystems": 1024 00:05:03.227 } 00:05:03.227 }, 00:05:03.227 { 00:05:03.227 "method": "nvmf_set_crdt", 00:05:03.227 "params": { 00:05:03.227 "crdt1": 0, 00:05:03.227 "crdt2": 0, 00:05:03.227 "crdt3": 0 00:05:03.227 } 00:05:03.227 }, 00:05:03.227 { 00:05:03.227 "method": "nvmf_create_transport", 00:05:03.227 "params": { 00:05:03.227 "trtype": "TCP", 00:05:03.227 "max_queue_depth": 128, 00:05:03.227 "max_io_qpairs_per_ctrlr": 127, 00:05:03.227 "in_capsule_data_size": 4096, 00:05:03.227 "max_io_size": 131072, 00:05:03.227 "io_unit_size": 131072, 00:05:03.227 
"max_aq_depth": 128, 00:05:03.227 "num_shared_buffers": 511, 00:05:03.227 "buf_cache_size": 4294967295, 00:05:03.227 "dif_insert_or_strip": false, 00:05:03.227 "zcopy": false, 00:05:03.227 "c2h_success": true, 00:05:03.227 "sock_priority": 0, 00:05:03.227 "abort_timeout_sec": 1, 00:05:03.227 "ack_timeout": 0, 00:05:03.227 "data_wr_pool_size": 0 00:05:03.227 } 00:05:03.227 } 00:05:03.227 ] 00:05:03.227 }, 00:05:03.227 { 00:05:03.227 "subsystem": "iscsi", 00:05:03.227 "config": [ 00:05:03.227 { 00:05:03.227 "method": "iscsi_set_options", 00:05:03.227 "params": { 00:05:03.227 "node_base": "iqn.2016-06.io.spdk", 00:05:03.227 "max_sessions": 128, 00:05:03.227 "max_connections_per_session": 2, 00:05:03.227 "max_queue_depth": 64, 00:05:03.227 "default_time2wait": 2, 00:05:03.227 "default_time2retain": 20, 00:05:03.227 "first_burst_length": 8192, 00:05:03.227 "immediate_data": true, 00:05:03.227 "allow_duplicated_isid": false, 00:05:03.227 "error_recovery_level": 0, 00:05:03.227 "nop_timeout": 60, 00:05:03.227 "nop_in_interval": 30, 00:05:03.227 "disable_chap": false, 00:05:03.227 "require_chap": false, 00:05:03.227 "mutual_chap": false, 00:05:03.227 "chap_group": 0, 00:05:03.227 "max_large_datain_per_connection": 64, 00:05:03.227 "max_r2t_per_connection": 4, 00:05:03.227 "pdu_pool_size": 36864, 00:05:03.227 "immediate_data_pool_size": 16384, 00:05:03.227 "data_out_pool_size": 2048 00:05:03.227 } 00:05:03.227 } 00:05:03.227 ] 00:05:03.227 } 00:05:03.227 ] 00:05:03.227 } 00:05:03.227 09:39:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:03.227 09:39:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1040050 00:05:03.227 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1040050 ']' 00:05:03.227 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1040050 00:05:03.227 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 
00:05:03.227 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:03.227 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1040050 00:05:03.227 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:03.227 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:03.227 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1040050' 00:05:03.227 killing process with pid 1040050 00:05:03.227 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1040050 00:05:03.227 09:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1040050 00:05:03.487 09:39:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1040068 00:05:03.487 09:39:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:03.487 09:39:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:08.757 09:39:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1040068 00:05:08.757 09:39:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1040068 ']' 00:05:08.757 09:39:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1040068 00:05:08.757 09:39:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:08.757 09:39:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.757 09:39:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1040068 00:05:08.757 09:39:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:05:08.757 09:39:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:08.757 09:39:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1040068' 00:05:08.757 killing process with pid 1040068 00:05:08.757 09:39:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1040068 00:05:08.757 09:39:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1040068 00:05:08.757 09:39:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:08.757 09:39:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:08.757 00:05:08.757 real 0m6.261s 00:05:08.757 user 0m5.960s 00:05:08.757 sys 0m0.577s 00:05:08.757 09:39:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.757 09:39:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.757 ************************************ 00:05:08.757 END TEST skip_rpc_with_json 00:05:08.757 ************************************ 00:05:08.757 09:39:37 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:08.757 09:39:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.757 09:39:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.757 09:39:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.015 ************************************ 00:05:09.015 START TEST skip_rpc_with_delay 00:05:09.015 ************************************ 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:09.015 [2024-12-07 09:39:37.553759] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:09.015 [2024-12-07 09:39:37.553819] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:09.015 00:05:09.015 real 0m0.069s 00:05:09.015 user 0m0.044s 00:05:09.015 sys 0m0.024s 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.015 09:39:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:09.015 ************************************ 00:05:09.015 END TEST skip_rpc_with_delay 00:05:09.015 ************************************ 00:05:09.015 09:39:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:09.015 09:39:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:09.015 09:39:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:09.015 09:39:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.015 09:39:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.015 09:39:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.015 ************************************ 00:05:09.015 START TEST exit_on_failed_rpc_init 00:05:09.015 ************************************ 00:05:09.015 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:09.015 09:39:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.015 09:39:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local 
spdk_pid=1041041 00:05:09.015 09:39:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1041041 00:05:09.015 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1041041 ']' 00:05:09.015 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.015 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.015 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.015 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.015 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:09.015 [2024-12-07 09:39:37.674106] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:09.015 [2024-12-07 09:39:37.674146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041041 ] 00:05:09.015 [2024-12-07 09:39:37.728701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.272 [2024-12-07 09:39:37.770698] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.272 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.272 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:09.272 09:39:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.272 09:39:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:09.272 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:09.272 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:09.272 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.272 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:09.273 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.273 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:09.273 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.273 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:09.273 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.273 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:09.273 09:39:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:09.530 [2024-12-07 09:39:38.008655] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:09.530 [2024-12-07 09:39:38.008704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041239 ] 00:05:09.530 [2024-12-07 09:39:38.057048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.530 [2024-12-07 09:39:38.097688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.530 [2024-12-07 09:39:38.097753] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:09.530 [2024-12-07 09:39:38.097762] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:09.530 [2024-12-07 09:39:38.097768] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:09.530 09:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:09.530 09:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:09.530 09:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:09.530 09:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:09.530 09:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:09.530 09:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:09.530 09:39:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:09.530 09:39:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1041041 00:05:09.530 09:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1041041 ']' 00:05:09.530 09:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1041041 00:05:09.530 09:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:09.530 09:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.530 09:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1041041 00:05:09.530 09:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:09.530 09:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:09.530 09:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1041041' 
00:05:09.530 killing process with pid 1041041 00:05:09.530 09:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1041041 00:05:09.530 09:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1041041 00:05:10.096 00:05:10.096 real 0m0.880s 00:05:10.096 user 0m0.935s 00:05:10.096 sys 0m0.366s 00:05:10.096 09:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.096 09:39:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:10.096 ************************************ 00:05:10.096 END TEST exit_on_failed_rpc_init 00:05:10.096 ************************************ 00:05:10.096 09:39:38 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:10.096 00:05:10.096 real 0m13.027s 00:05:10.096 user 0m12.302s 00:05:10.096 sys 0m1.487s 00:05:10.096 09:39:38 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.096 09:39:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.096 ************************************ 00:05:10.096 END TEST skip_rpc 00:05:10.096 ************************************ 00:05:10.096 09:39:38 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:10.096 09:39:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.096 09:39:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.096 09:39:38 -- common/autotest_common.sh@10 -- # set +x 00:05:10.096 ************************************ 00:05:10.096 START TEST rpc_client 00:05:10.096 ************************************ 00:05:10.096 09:39:38 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:10.096 * Looking for test storage... 
00:05:10.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:10.096 09:39:38 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:10.096 09:39:38 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:10.096 09:39:38 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:10.096 09:39:38 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:10.096 09:39:38 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.096 09:39:38 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.096 09:39:38 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.096 09:39:38 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.096 09:39:38 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.096 09:39:38 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.096 09:39:38 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.096 09:39:38 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.096 09:39:38 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.096 09:39:38 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.096 09:39:38 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.097 09:39:38 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:10.097 09:39:38 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:10.097 09:39:38 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.097 09:39:38 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.097 09:39:38 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:10.097 09:39:38 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:10.097 09:39:38 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.097 09:39:38 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:10.097 09:39:38 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.097 09:39:38 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:10.097 09:39:38 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:10.097 09:39:38 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.097 09:39:38 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:10.097 09:39:38 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.097 09:39:38 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.097 09:39:38 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.097 09:39:38 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:10.097 09:39:38 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.097 09:39:38 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:10.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.097 --rc genhtml_branch_coverage=1 00:05:10.097 --rc genhtml_function_coverage=1 00:05:10.097 --rc genhtml_legend=1 00:05:10.097 --rc geninfo_all_blocks=1 00:05:10.097 --rc geninfo_unexecuted_blocks=1 00:05:10.097 00:05:10.097 ' 00:05:10.097 09:39:38 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:10.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.097 --rc genhtml_branch_coverage=1 00:05:10.097 --rc genhtml_function_coverage=1 00:05:10.097 --rc genhtml_legend=1 00:05:10.097 --rc geninfo_all_blocks=1 00:05:10.097 --rc geninfo_unexecuted_blocks=1 00:05:10.097 00:05:10.097 ' 00:05:10.097 09:39:38 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:10.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.097 --rc genhtml_branch_coverage=1 00:05:10.097 --rc genhtml_function_coverage=1 00:05:10.097 --rc genhtml_legend=1 00:05:10.097 --rc geninfo_all_blocks=1 00:05:10.097 --rc geninfo_unexecuted_blocks=1 00:05:10.097 00:05:10.097 ' 00:05:10.097 09:39:38 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:10.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.097 --rc genhtml_branch_coverage=1 00:05:10.097 --rc genhtml_function_coverage=1 00:05:10.097 --rc genhtml_legend=1 00:05:10.097 --rc geninfo_all_blocks=1 00:05:10.097 --rc geninfo_unexecuted_blocks=1 00:05:10.097 00:05:10.097 ' 00:05:10.097 09:39:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:10.097 OK 00:05:10.097 09:39:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:10.097 00:05:10.097 real 0m0.181s 00:05:10.097 user 0m0.114s 00:05:10.097 sys 0m0.078s 00:05:10.097 09:39:38 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.097 09:39:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:10.097 ************************************ 00:05:10.097 END TEST rpc_client 00:05:10.097 ************************************ 00:05:10.356 09:39:38 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:10.356 09:39:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.356 09:39:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.356 09:39:38 -- common/autotest_common.sh@10 -- # set +x 00:05:10.356 ************************************ 00:05:10.356 START TEST json_config 00:05:10.356 ************************************ 00:05:10.356 09:39:38 json_config -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:10.356 09:39:38 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:10.356 09:39:38 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:10.356 09:39:38 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:10.356 09:39:38 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:10.356 09:39:38 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.356 09:39:38 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.356 09:39:38 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.356 09:39:38 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.356 09:39:38 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.356 09:39:38 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.356 09:39:38 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.356 09:39:38 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.356 09:39:38 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.356 09:39:38 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.356 09:39:38 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.356 09:39:38 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:10.356 09:39:38 json_config -- scripts/common.sh@345 -- # : 1 00:05:10.356 09:39:38 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.356 09:39:38 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.356 09:39:38 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:10.356 09:39:39 json_config -- scripts/common.sh@353 -- # local d=1 00:05:10.356 09:39:39 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.356 09:39:39 json_config -- scripts/common.sh@355 -- # echo 1 00:05:10.356 09:39:39 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.356 09:39:39 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:10.356 09:39:39 json_config -- scripts/common.sh@353 -- # local d=2 00:05:10.356 09:39:39 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.356 09:39:39 json_config -- scripts/common.sh@355 -- # echo 2 00:05:10.356 09:39:39 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.356 09:39:39 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.356 09:39:39 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.356 09:39:39 json_config -- scripts/common.sh@368 -- # return 0 00:05:10.356 09:39:39 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.356 09:39:39 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:10.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.356 --rc genhtml_branch_coverage=1 00:05:10.356 --rc genhtml_function_coverage=1 00:05:10.356 --rc genhtml_legend=1 00:05:10.356 --rc geninfo_all_blocks=1 00:05:10.356 --rc geninfo_unexecuted_blocks=1 00:05:10.356 00:05:10.356 ' 00:05:10.357 09:39:39 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:10.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.357 --rc genhtml_branch_coverage=1 00:05:10.357 --rc genhtml_function_coverage=1 00:05:10.357 --rc genhtml_legend=1 00:05:10.357 --rc geninfo_all_blocks=1 00:05:10.357 --rc geninfo_unexecuted_blocks=1 00:05:10.357 00:05:10.357 ' 00:05:10.357 09:39:39 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:10.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.357 --rc genhtml_branch_coverage=1 00:05:10.357 --rc genhtml_function_coverage=1 00:05:10.357 --rc genhtml_legend=1 00:05:10.357 --rc geninfo_all_blocks=1 00:05:10.357 --rc geninfo_unexecuted_blocks=1 00:05:10.357 00:05:10.357 ' 00:05:10.357 09:39:39 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:10.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.357 --rc genhtml_branch_coverage=1 00:05:10.357 --rc genhtml_function_coverage=1 00:05:10.357 --rc genhtml_legend=1 00:05:10.357 --rc geninfo_all_blocks=1 00:05:10.357 --rc geninfo_unexecuted_blocks=1 00:05:10.357 00:05:10.357 ' 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:10.357 09:39:39 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:10.357 09:39:39 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.357 09:39:39 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.357 09:39:39 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.357 09:39:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.357 09:39:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.357 09:39:39 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.357 09:39:39 json_config -- paths/export.sh@5 -- # export PATH 00:05:10.357 09:39:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@51 -- # : 0 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:10.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:10.357 09:39:39 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:10.357 INFO: JSON configuration test init 00:05:10.357 09:39:39 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:10.357 09:39:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:10.357 09:39:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:10.357 09:39:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:10.357 09:39:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.357 09:39:39 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:10.357 09:39:39 json_config -- json_config/common.sh@9 -- # local app=target 00:05:10.357 09:39:39 json_config -- json_config/common.sh@10 -- # shift 00:05:10.357 09:39:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:10.357 09:39:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:10.357 09:39:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:10.357 09:39:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.357 09:39:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.357 09:39:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1041423 00:05:10.357 09:39:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:10.357 Waiting for target to run... 
00:05:10.357 09:39:39 json_config -- json_config/common.sh@25 -- # waitforlisten 1041423 /var/tmp/spdk_tgt.sock 00:05:10.357 09:39:39 json_config -- common/autotest_common.sh@831 -- # '[' -z 1041423 ']' 00:05:10.357 09:39:39 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:10.357 09:39:39 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:10.357 09:39:39 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.357 09:39:39 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:10.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:10.357 09:39:39 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.357 09:39:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.617 [2024-12-07 09:39:39.110785] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:10.617 [2024-12-07 09:39:39.110837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041423 ] 00:05:10.875 [2024-12-07 09:39:39.546540] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.875 [2024-12-07 09:39:39.579883] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.442 09:39:39 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.442 09:39:39 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:11.442 09:39:39 json_config -- json_config/common.sh@26 -- # echo '' 00:05:11.442 00:05:11.442 09:39:39 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:11.442 09:39:39 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:11.442 09:39:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.442 09:39:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.442 09:39:39 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:11.442 09:39:39 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:11.442 09:39:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.442 09:39:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.442 09:39:39 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:11.442 09:39:39 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:11.442 09:39:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:14.731 09:39:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:14.731 09:39:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:14.731 09:39:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@54 -- # sort 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:14.731 09:39:43 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:14.731 09:39:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:14.731 09:39:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:14.731 09:39:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:14.731 09:39:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:14.731 09:39:43 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:14.731 09:39:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:14.990 MallocForNvmf0 00:05:14.990 09:39:43 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:05:14.990 09:39:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:14.990 MallocForNvmf1 00:05:14.990 09:39:43 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:14.990 09:39:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:15.249 [2024-12-07 09:39:43.846596] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:15.249 09:39:43 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:15.249 09:39:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:15.507 09:39:44 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:15.507 09:39:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:15.507 09:39:44 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:15.508 09:39:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:15.766 09:39:44 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:15.766 09:39:44 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:16.025 [2024-12-07 09:39:44.588941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:16.025 09:39:44 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:16.025 09:39:44 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:16.025 09:39:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.025 09:39:44 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:16.025 09:39:44 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:16.025 09:39:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.025 09:39:44 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:16.025 09:39:44 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:16.025 09:39:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:16.283 MallocBdevForConfigChangeCheck 00:05:16.283 09:39:44 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:16.283 09:39:44 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:16.283 09:39:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.283 09:39:44 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:16.284 09:39:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.548 09:39:45 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:05:16.548 INFO: shutting down applications... 00:05:16.548 09:39:45 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:16.548 09:39:45 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:16.548 09:39:45 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:16.548 09:39:45 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:18.452 Calling clear_iscsi_subsystem 00:05:18.452 Calling clear_nvmf_subsystem 00:05:18.452 Calling clear_nbd_subsystem 00:05:18.452 Calling clear_ublk_subsystem 00:05:18.452 Calling clear_vhost_blk_subsystem 00:05:18.452 Calling clear_vhost_scsi_subsystem 00:05:18.452 Calling clear_bdev_subsystem 00:05:18.452 09:39:46 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:18.452 09:39:46 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:18.452 09:39:46 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:18.452 09:39:46 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.452 09:39:46 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:18.452 09:39:46 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:18.452 09:39:47 json_config -- json_config/json_config.sh@352 -- # break 00:05:18.452 09:39:47 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:18.452 09:39:47 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:05:18.452 09:39:47 json_config -- json_config/common.sh@31 -- # local app=target 00:05:18.452 09:39:47 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:18.452 09:39:47 json_config -- json_config/common.sh@35 -- # [[ -n 1041423 ]] 00:05:18.452 09:39:47 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1041423 00:05:18.452 09:39:47 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:18.452 09:39:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.452 09:39:47 json_config -- json_config/common.sh@41 -- # kill -0 1041423 00:05:18.452 09:39:47 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.023 09:39:47 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:19.023 09:39:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.023 09:39:47 json_config -- json_config/common.sh@41 -- # kill -0 1041423 00:05:19.023 09:39:47 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:19.023 09:39:47 json_config -- json_config/common.sh@43 -- # break 00:05:19.023 09:39:47 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:19.023 09:39:47 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:19.023 SPDK target shutdown done 00:05:19.023 09:39:47 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:19.023 INFO: relaunching applications... 
00:05:19.023 09:39:47 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.023 09:39:47 json_config -- json_config/common.sh@9 -- # local app=target 00:05:19.023 09:39:47 json_config -- json_config/common.sh@10 -- # shift 00:05:19.023 09:39:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:19.023 09:39:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:19.023 09:39:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:19.023 09:39:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.023 09:39:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.023 09:39:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1043134 00:05:19.023 09:39:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:19.023 Waiting for target to run... 00:05:19.023 09:39:47 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.023 09:39:47 json_config -- json_config/common.sh@25 -- # waitforlisten 1043134 /var/tmp/spdk_tgt.sock 00:05:19.023 09:39:47 json_config -- common/autotest_common.sh@831 -- # '[' -z 1043134 ']' 00:05:19.023 09:39:47 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.023 09:39:47 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.023 09:39:47 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:19.023 09:39:47 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.023 09:39:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.023 [2024-12-07 09:39:47.709629] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:19.023 [2024-12-07 09:39:47.709688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043134 ] 00:05:19.590 [2024-12-07 09:39:48.146195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.590 [2024-12-07 09:39:48.179123] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.879 [2024-12-07 09:39:51.184829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.879 [2024-12-07 09:39:51.217201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:23.447 09:39:51 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.447 09:39:51 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:23.447 09:39:51 json_config -- json_config/common.sh@26 -- # echo '' 00:05:23.447 00:05:23.447 09:39:51 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:23.447 09:39:51 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:23.447 INFO: Checking if target configuration is the same... 
00:05:23.447 09:39:51 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.447 09:39:51 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:23.447 09:39:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.447 + '[' 2 -ne 2 ']' 00:05:23.447 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:23.447 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:23.447 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:23.447 +++ basename /dev/fd/62 00:05:23.447 ++ mktemp /tmp/62.XXX 00:05:23.447 + tmp_file_1=/tmp/62.EhK 00:05:23.447 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.447 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:23.447 + tmp_file_2=/tmp/spdk_tgt_config.json.CqQ 00:05:23.447 + ret=0 00:05:23.447 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:23.706 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:23.706 + diff -u /tmp/62.EhK /tmp/spdk_tgt_config.json.CqQ 00:05:23.706 + echo 'INFO: JSON config files are the same' 00:05:23.706 INFO: JSON config files are the same 00:05:23.706 + rm /tmp/62.EhK /tmp/spdk_tgt_config.json.CqQ 00:05:23.706 + exit 0 00:05:23.706 09:39:52 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:23.706 09:39:52 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:23.706 INFO: changing configuration and checking if this can be detected... 
00:05:23.706 09:39:52 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:23.706 09:39:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:23.965 09:39:52 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:23.965 09:39:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.965 09:39:52 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.965 + '[' 2 -ne 2 ']' 00:05:23.965 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:23.965 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:23.965 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:23.965 +++ basename /dev/fd/62 00:05:23.965 ++ mktemp /tmp/62.XXX 00:05:23.965 + tmp_file_1=/tmp/62.CbP 00:05:23.965 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.965 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:23.965 + tmp_file_2=/tmp/spdk_tgt_config.json.lOh 00:05:23.965 + ret=0 00:05:23.965 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.224 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.224 + diff -u /tmp/62.CbP /tmp/spdk_tgt_config.json.lOh 00:05:24.224 + ret=1 00:05:24.224 + echo '=== Start of file: /tmp/62.CbP ===' 00:05:24.224 + cat /tmp/62.CbP 00:05:24.224 + echo '=== End of file: /tmp/62.CbP ===' 00:05:24.224 + echo '' 00:05:24.224 + echo '=== Start of file: /tmp/spdk_tgt_config.json.lOh ===' 00:05:24.224 + cat /tmp/spdk_tgt_config.json.lOh 00:05:24.224 + echo '=== End of file: /tmp/spdk_tgt_config.json.lOh ===' 00:05:24.224 + echo '' 00:05:24.224 + rm /tmp/62.CbP /tmp/spdk_tgt_config.json.lOh 00:05:24.224 + exit 1 00:05:24.224 09:39:52 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:24.224 INFO: configuration change detected. 
00:05:24.224 09:39:52 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:24.224 09:39:52 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:24.224 09:39:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:24.224 09:39:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.224 09:39:52 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:24.224 09:39:52 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:24.224 09:39:52 json_config -- json_config/json_config.sh@324 -- # [[ -n 1043134 ]] 00:05:24.224 09:39:52 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:24.224 09:39:52 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:24.224 09:39:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:24.224 09:39:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.224 09:39:52 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:24.224 09:39:52 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:24.224 09:39:52 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:24.224 09:39:52 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:24.224 09:39:52 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:24.224 09:39:52 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:24.224 09:39:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:24.224 09:39:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.483 09:39:52 json_config -- json_config/json_config.sh@330 -- # killprocess 1043134 00:05:24.483 09:39:52 json_config -- common/autotest_common.sh@950 -- # '[' -z 1043134 ']' 00:05:24.483 09:39:52 json_config -- common/autotest_common.sh@954 -- # kill -0 
1043134 00:05:24.483 09:39:52 json_config -- common/autotest_common.sh@955 -- # uname 00:05:24.483 09:39:52 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:24.483 09:39:52 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1043134 00:05:24.483 09:39:53 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:24.483 09:39:53 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:24.483 09:39:53 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1043134' 00:05:24.483 killing process with pid 1043134 00:05:24.483 09:39:53 json_config -- common/autotest_common.sh@969 -- # kill 1043134 00:05:24.483 09:39:53 json_config -- common/autotest_common.sh@974 -- # wait 1043134 00:05:25.860 09:39:54 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.860 09:39:54 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:25.860 09:39:54 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:25.860 09:39:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.860 09:39:54 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:25.860 09:39:54 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:25.860 INFO: Success 00:05:25.860 00:05:25.860 real 0m15.683s 00:05:25.860 user 0m16.631s 00:05:25.860 sys 0m2.109s 00:05:25.860 09:39:54 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.860 09:39:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.860 ************************************ 00:05:25.860 END TEST json_config 00:05:25.860 ************************************ 00:05:25.860 09:39:54 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:25.860 09:39:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.860 09:39:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.860 09:39:54 -- common/autotest_common.sh@10 -- # set +x 00:05:26.119 ************************************ 00:05:26.119 START TEST json_config_extra_key 00:05:26.119 ************************************ 00:05:26.119 09:39:54 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:26.120 09:39:54 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:26.120 09:39:54 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:26.120 09:39:54 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:26.120 09:39:54 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:26.120 09:39:54 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.120 09:39:54 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:26.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.120 --rc genhtml_branch_coverage=1 00:05:26.120 --rc genhtml_function_coverage=1 00:05:26.120 --rc genhtml_legend=1 00:05:26.120 --rc geninfo_all_blocks=1 
00:05:26.120 --rc geninfo_unexecuted_blocks=1 00:05:26.120 00:05:26.120 ' 00:05:26.120 09:39:54 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:26.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.120 --rc genhtml_branch_coverage=1 00:05:26.120 --rc genhtml_function_coverage=1 00:05:26.120 --rc genhtml_legend=1 00:05:26.120 --rc geninfo_all_blocks=1 00:05:26.120 --rc geninfo_unexecuted_blocks=1 00:05:26.120 00:05:26.120 ' 00:05:26.120 09:39:54 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:26.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.120 --rc genhtml_branch_coverage=1 00:05:26.120 --rc genhtml_function_coverage=1 00:05:26.120 --rc genhtml_legend=1 00:05:26.120 --rc geninfo_all_blocks=1 00:05:26.120 --rc geninfo_unexecuted_blocks=1 00:05:26.120 00:05:26.120 ' 00:05:26.120 09:39:54 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:26.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.120 --rc genhtml_branch_coverage=1 00:05:26.120 --rc genhtml_function_coverage=1 00:05:26.120 --rc genhtml_legend=1 00:05:26.120 --rc geninfo_all_blocks=1 00:05:26.120 --rc geninfo_unexecuted_blocks=1 00:05:26.120 00:05:26.120 ' 00:05:26.120 09:39:54 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.120 09:39:54 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.120 09:39:54 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.120 09:39:54 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.120 09:39:54 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.120 09:39:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:26.120 09:39:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:26.120 09:39:54 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:26.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:26.120 09:39:54 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:26.120 09:39:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:26.120 09:39:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:26.120 09:39:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:26.120 09:39:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:26.120 09:39:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:26.120 09:39:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:26.120 09:39:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:26.120 09:39:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:26.120 09:39:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:26.120 09:39:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.120 09:39:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:26.120 INFO: launching applications... 00:05:26.120 09:39:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:26.120 09:39:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:26.120 09:39:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:26.120 09:39:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:26.120 09:39:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:26.120 09:39:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:26.120 09:39:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.120 09:39:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.120 09:39:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1044414 00:05:26.120 09:39:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:26.120 Waiting for target to run... 
00:05:26.120 09:39:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1044414 /var/tmp/spdk_tgt.sock 00:05:26.120 09:39:54 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1044414 ']' 00:05:26.120 09:39:54 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.120 09:39:54 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:26.120 09:39:54 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.120 09:39:54 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.120 09:39:54 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.120 09:39:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:26.379 [2024-12-07 09:39:54.848618] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:26.379 [2024-12-07 09:39:54.848670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044414 ] 00:05:26.637 [2024-12-07 09:39:55.116154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.637 [2024-12-07 09:39:55.140698] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.225 09:39:55 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.225 09:39:55 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:27.225 09:39:55 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:27.225 00:05:27.225 09:39:55 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:27.225 INFO: shutting down applications... 00:05:27.225 09:39:55 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:27.225 09:39:55 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:27.225 09:39:55 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:27.225 09:39:55 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1044414 ]] 00:05:27.225 09:39:55 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1044414 00:05:27.225 09:39:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:27.225 09:39:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.225 09:39:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1044414 00:05:27.225 09:39:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.483 09:39:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.483 09:39:56 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.483 09:39:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1044414 00:05:27.483 09:39:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:27.483 09:39:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:27.483 09:39:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:27.483 09:39:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:27.483 SPDK target shutdown done 00:05:27.483 09:39:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:27.483 Success 00:05:27.483 00:05:27.483 real 0m1.571s 00:05:27.483 user 0m1.375s 00:05:27.483 sys 0m0.391s 00:05:27.483 09:39:56 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.483 09:39:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:27.483 ************************************ 00:05:27.483 END TEST json_config_extra_key 00:05:27.483 ************************************ 00:05:27.741 09:39:56 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.741 09:39:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.741 09:39:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.741 09:39:56 -- common/autotest_common.sh@10 -- # set +x 00:05:27.741 ************************************ 00:05:27.741 START TEST alias_rpc 00:05:27.741 ************************************ 00:05:27.741 09:39:56 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.741 * Looking for test storage... 
00:05:27.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:27.741 09:39:56 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:27.741 09:39:56 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:27.741 09:39:56 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:27.741 09:39:56 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:27.741 09:39:56 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.741 09:39:56 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.741 09:39:56 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.741 09:39:56 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.741 09:39:56 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.742 09:39:56 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:27.742 09:39:56 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.742 09:39:56 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:27.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.742 --rc genhtml_branch_coverage=1 00:05:27.742 --rc genhtml_function_coverage=1 00:05:27.742 --rc genhtml_legend=1 00:05:27.742 --rc geninfo_all_blocks=1 00:05:27.742 --rc geninfo_unexecuted_blocks=1 00:05:27.742 00:05:27.742 ' 00:05:27.742 09:39:56 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:27.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.742 --rc genhtml_branch_coverage=1 00:05:27.742 --rc genhtml_function_coverage=1 00:05:27.742 --rc genhtml_legend=1 00:05:27.742 --rc geninfo_all_blocks=1 00:05:27.742 --rc geninfo_unexecuted_blocks=1 00:05:27.742 00:05:27.742 ' 00:05:27.742 09:39:56 alias_rpc -- common/autotest_common.sh@1695 -- 
# export 'LCOV=lcov 00:05:27.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.742 --rc genhtml_branch_coverage=1 00:05:27.742 --rc genhtml_function_coverage=1 00:05:27.742 --rc genhtml_legend=1 00:05:27.742 --rc geninfo_all_blocks=1 00:05:27.742 --rc geninfo_unexecuted_blocks=1 00:05:27.742 00:05:27.742 ' 00:05:27.742 09:39:56 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:27.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.742 --rc genhtml_branch_coverage=1 00:05:27.742 --rc genhtml_function_coverage=1 00:05:27.742 --rc genhtml_legend=1 00:05:27.742 --rc geninfo_all_blocks=1 00:05:27.742 --rc geninfo_unexecuted_blocks=1 00:05:27.742 00:05:27.742 ' 00:05:27.742 09:39:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:27.742 09:39:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1044709 00:05:27.742 09:39:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1044709 00:05:27.742 09:39:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.742 09:39:56 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1044709 ']' 00:05:27.742 09:39:56 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.742 09:39:56 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.742 09:39:56 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.742 09:39:56 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.742 09:39:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.000 [2024-12-07 09:39:56.480368] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:28.000 [2024-12-07 09:39:56.480418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044709 ] 00:05:28.000 [2024-12-07 09:39:56.532762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.000 [2024-12-07 09:39:56.574266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.258 09:39:56 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.258 09:39:56 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:28.258 09:39:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:28.516 09:39:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1044709 00:05:28.516 09:39:56 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1044709 ']' 00:05:28.516 09:39:56 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1044709 00:05:28.516 09:39:56 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:28.516 09:39:56 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.516 09:39:56 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1044709 00:05:28.516 09:39:57 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.516 09:39:57 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.516 09:39:57 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1044709' 00:05:28.516 killing process with pid 1044709 00:05:28.516 09:39:57 alias_rpc -- common/autotest_common.sh@969 -- # kill 1044709 00:05:28.516 09:39:57 alias_rpc -- common/autotest_common.sh@974 -- # wait 1044709 00:05:28.774 00:05:28.774 real 0m1.100s 00:05:28.774 user 0m1.137s 00:05:28.774 sys 0m0.396s 00:05:28.774 09:39:57 alias_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.774 09:39:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.774 ************************************ 00:05:28.774 END TEST alias_rpc 00:05:28.774 ************************************ 00:05:28.774 09:39:57 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:28.774 09:39:57 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:28.774 09:39:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.774 09:39:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.774 09:39:57 -- common/autotest_common.sh@10 -- # set +x 00:05:28.774 ************************************ 00:05:28.774 START TEST spdkcli_tcp 00:05:28.774 ************************************ 00:05:28.774 09:39:57 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:29.032 * Looking for test storage... 
00:05:29.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:29.032 09:39:57 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:29.032 09:39:57 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:29.032 09:39:57 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:29.032 09:39:57 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.032 09:39:57 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:29.032 09:39:57 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.032 09:39:57 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:29.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.032 --rc genhtml_branch_coverage=1 00:05:29.032 --rc genhtml_function_coverage=1 00:05:29.032 --rc genhtml_legend=1 00:05:29.032 --rc geninfo_all_blocks=1 00:05:29.032 --rc geninfo_unexecuted_blocks=1 00:05:29.032 00:05:29.032 ' 00:05:29.032 09:39:57 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:29.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.032 --rc genhtml_branch_coverage=1 00:05:29.032 --rc genhtml_function_coverage=1 00:05:29.032 --rc genhtml_legend=1 00:05:29.032 --rc geninfo_all_blocks=1 00:05:29.032 --rc geninfo_unexecuted_blocks=1 00:05:29.032 00:05:29.032 ' 00:05:29.032 09:39:57 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:29.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.032 --rc genhtml_branch_coverage=1 00:05:29.032 --rc genhtml_function_coverage=1 00:05:29.032 --rc genhtml_legend=1 00:05:29.032 --rc geninfo_all_blocks=1 00:05:29.032 --rc geninfo_unexecuted_blocks=1 00:05:29.032 00:05:29.032 ' 00:05:29.032 09:39:57 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:29.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.032 --rc genhtml_branch_coverage=1 00:05:29.032 --rc genhtml_function_coverage=1 00:05:29.032 --rc genhtml_legend=1 00:05:29.032 --rc geninfo_all_blocks=1 00:05:29.032 --rc geninfo_unexecuted_blocks=1 00:05:29.032 00:05:29.032 ' 00:05:29.032 09:39:57 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:29.032 09:39:57 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:29.032 09:39:57 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:29.032 09:39:57 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:29.032 09:39:57 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:29.032 09:39:57 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:29.032 09:39:57 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:29.032 09:39:57 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.032 09:39:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.032 09:39:57 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1044996 00:05:29.032 09:39:57 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:29.032 09:39:57 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 1044996 00:05:29.032 09:39:57 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1044996 ']' 00:05:29.032 09:39:57 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.032 09:39:57 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.032 09:39:57 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.032 09:39:57 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.032 09:39:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.032 [2024-12-07 09:39:57.653126] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:29.032 [2024-12-07 09:39:57.653173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044996 ] 00:05:29.032 [2024-12-07 09:39:57.707367] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.032 [2024-12-07 09:39:57.748106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.032 [2024-12-07 09:39:57.748108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.291 09:39:57 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.291 09:39:57 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:29.291 09:39:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1045000 00:05:29.291 09:39:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:29.291 09:39:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:29.549 [ 00:05:29.549 "bdev_malloc_delete", 00:05:29.549 "bdev_malloc_create", 00:05:29.549 "bdev_null_resize", 00:05:29.549 "bdev_null_delete", 00:05:29.549 "bdev_null_create", 00:05:29.549 "bdev_nvme_cuse_unregister", 00:05:29.549 "bdev_nvme_cuse_register", 00:05:29.549 "bdev_opal_new_user", 00:05:29.549 "bdev_opal_set_lock_state", 00:05:29.549 "bdev_opal_delete", 00:05:29.549 "bdev_opal_get_info", 00:05:29.549 "bdev_opal_create", 00:05:29.549 "bdev_nvme_opal_revert", 00:05:29.549 "bdev_nvme_opal_init", 00:05:29.549 "bdev_nvme_send_cmd", 00:05:29.549 "bdev_nvme_set_keys", 00:05:29.549 "bdev_nvme_get_path_iostat", 00:05:29.549 "bdev_nvme_get_mdns_discovery_info", 00:05:29.549 "bdev_nvme_stop_mdns_discovery", 00:05:29.549 "bdev_nvme_start_mdns_discovery", 00:05:29.549 "bdev_nvme_set_multipath_policy", 00:05:29.549 "bdev_nvme_set_preferred_path", 00:05:29.549 "bdev_nvme_get_io_paths", 00:05:29.549 "bdev_nvme_remove_error_injection", 00:05:29.550 "bdev_nvme_add_error_injection", 00:05:29.550 "bdev_nvme_get_discovery_info", 00:05:29.550 "bdev_nvme_stop_discovery", 00:05:29.550 "bdev_nvme_start_discovery", 00:05:29.550 "bdev_nvme_get_controller_health_info", 00:05:29.550 "bdev_nvme_disable_controller", 00:05:29.550 "bdev_nvme_enable_controller", 00:05:29.550 "bdev_nvme_reset_controller", 00:05:29.550 "bdev_nvme_get_transport_statistics", 00:05:29.550 "bdev_nvme_apply_firmware", 00:05:29.550 "bdev_nvme_detach_controller", 00:05:29.550 "bdev_nvme_get_controllers", 00:05:29.550 "bdev_nvme_attach_controller", 00:05:29.550 "bdev_nvme_set_hotplug", 00:05:29.550 "bdev_nvme_set_options", 00:05:29.550 "bdev_passthru_delete", 00:05:29.550 "bdev_passthru_create", 00:05:29.550 "bdev_lvol_set_parent_bdev", 00:05:29.550 "bdev_lvol_set_parent", 00:05:29.550 "bdev_lvol_check_shallow_copy", 00:05:29.550 "bdev_lvol_start_shallow_copy", 00:05:29.550 "bdev_lvol_grow_lvstore", 00:05:29.550 "bdev_lvol_get_lvols", 00:05:29.550 
"bdev_lvol_get_lvstores", 00:05:29.550 "bdev_lvol_delete", 00:05:29.550 "bdev_lvol_set_read_only", 00:05:29.550 "bdev_lvol_resize", 00:05:29.550 "bdev_lvol_decouple_parent", 00:05:29.550 "bdev_lvol_inflate", 00:05:29.550 "bdev_lvol_rename", 00:05:29.550 "bdev_lvol_clone_bdev", 00:05:29.550 "bdev_lvol_clone", 00:05:29.550 "bdev_lvol_snapshot", 00:05:29.550 "bdev_lvol_create", 00:05:29.550 "bdev_lvol_delete_lvstore", 00:05:29.550 "bdev_lvol_rename_lvstore", 00:05:29.550 "bdev_lvol_create_lvstore", 00:05:29.550 "bdev_raid_set_options", 00:05:29.550 "bdev_raid_remove_base_bdev", 00:05:29.550 "bdev_raid_add_base_bdev", 00:05:29.550 "bdev_raid_delete", 00:05:29.550 "bdev_raid_create", 00:05:29.550 "bdev_raid_get_bdevs", 00:05:29.550 "bdev_error_inject_error", 00:05:29.550 "bdev_error_delete", 00:05:29.550 "bdev_error_create", 00:05:29.550 "bdev_split_delete", 00:05:29.550 "bdev_split_create", 00:05:29.550 "bdev_delay_delete", 00:05:29.550 "bdev_delay_create", 00:05:29.550 "bdev_delay_update_latency", 00:05:29.550 "bdev_zone_block_delete", 00:05:29.550 "bdev_zone_block_create", 00:05:29.550 "blobfs_create", 00:05:29.550 "blobfs_detect", 00:05:29.550 "blobfs_set_cache_size", 00:05:29.550 "bdev_aio_delete", 00:05:29.550 "bdev_aio_rescan", 00:05:29.550 "bdev_aio_create", 00:05:29.550 "bdev_ftl_set_property", 00:05:29.550 "bdev_ftl_get_properties", 00:05:29.550 "bdev_ftl_get_stats", 00:05:29.550 "bdev_ftl_unmap", 00:05:29.550 "bdev_ftl_unload", 00:05:29.550 "bdev_ftl_delete", 00:05:29.550 "bdev_ftl_load", 00:05:29.550 "bdev_ftl_create", 00:05:29.550 "bdev_virtio_attach_controller", 00:05:29.550 "bdev_virtio_scsi_get_devices", 00:05:29.550 "bdev_virtio_detach_controller", 00:05:29.550 "bdev_virtio_blk_set_hotplug", 00:05:29.550 "bdev_iscsi_delete", 00:05:29.550 "bdev_iscsi_create", 00:05:29.550 "bdev_iscsi_set_options", 00:05:29.550 "accel_error_inject_error", 00:05:29.550 "ioat_scan_accel_module", 00:05:29.550 "dsa_scan_accel_module", 00:05:29.550 "iaa_scan_accel_module", 
00:05:29.550 "vfu_virtio_create_fs_endpoint", 00:05:29.550 "vfu_virtio_create_scsi_endpoint", 00:05:29.550 "vfu_virtio_scsi_remove_target", 00:05:29.550 "vfu_virtio_scsi_add_target", 00:05:29.550 "vfu_virtio_create_blk_endpoint", 00:05:29.550 "vfu_virtio_delete_endpoint", 00:05:29.550 "keyring_file_remove_key", 00:05:29.550 "keyring_file_add_key", 00:05:29.550 "keyring_linux_set_options", 00:05:29.550 "fsdev_aio_delete", 00:05:29.550 "fsdev_aio_create", 00:05:29.550 "iscsi_get_histogram", 00:05:29.550 "iscsi_enable_histogram", 00:05:29.550 "iscsi_set_options", 00:05:29.550 "iscsi_get_auth_groups", 00:05:29.550 "iscsi_auth_group_remove_secret", 00:05:29.550 "iscsi_auth_group_add_secret", 00:05:29.550 "iscsi_delete_auth_group", 00:05:29.550 "iscsi_create_auth_group", 00:05:29.550 "iscsi_set_discovery_auth", 00:05:29.550 "iscsi_get_options", 00:05:29.550 "iscsi_target_node_request_logout", 00:05:29.550 "iscsi_target_node_set_redirect", 00:05:29.550 "iscsi_target_node_set_auth", 00:05:29.550 "iscsi_target_node_add_lun", 00:05:29.550 "iscsi_get_stats", 00:05:29.550 "iscsi_get_connections", 00:05:29.550 "iscsi_portal_group_set_auth", 00:05:29.550 "iscsi_start_portal_group", 00:05:29.550 "iscsi_delete_portal_group", 00:05:29.550 "iscsi_create_portal_group", 00:05:29.550 "iscsi_get_portal_groups", 00:05:29.550 "iscsi_delete_target_node", 00:05:29.550 "iscsi_target_node_remove_pg_ig_maps", 00:05:29.550 "iscsi_target_node_add_pg_ig_maps", 00:05:29.550 "iscsi_create_target_node", 00:05:29.550 "iscsi_get_target_nodes", 00:05:29.550 "iscsi_delete_initiator_group", 00:05:29.550 "iscsi_initiator_group_remove_initiators", 00:05:29.550 "iscsi_initiator_group_add_initiators", 00:05:29.550 "iscsi_create_initiator_group", 00:05:29.550 "iscsi_get_initiator_groups", 00:05:29.550 "nvmf_set_crdt", 00:05:29.550 "nvmf_set_config", 00:05:29.550 "nvmf_set_max_subsystems", 00:05:29.550 "nvmf_stop_mdns_prr", 00:05:29.550 "nvmf_publish_mdns_prr", 00:05:29.550 "nvmf_subsystem_get_listeners", 
00:05:29.550 "nvmf_subsystem_get_qpairs", 00:05:29.550 "nvmf_subsystem_get_controllers", 00:05:29.550 "nvmf_get_stats", 00:05:29.550 "nvmf_get_transports", 00:05:29.550 "nvmf_create_transport", 00:05:29.550 "nvmf_get_targets", 00:05:29.550 "nvmf_delete_target", 00:05:29.550 "nvmf_create_target", 00:05:29.550 "nvmf_subsystem_allow_any_host", 00:05:29.550 "nvmf_subsystem_set_keys", 00:05:29.550 "nvmf_subsystem_remove_host", 00:05:29.550 "nvmf_subsystem_add_host", 00:05:29.550 "nvmf_ns_remove_host", 00:05:29.550 "nvmf_ns_add_host", 00:05:29.550 "nvmf_subsystem_remove_ns", 00:05:29.550 "nvmf_subsystem_set_ns_ana_group", 00:05:29.550 "nvmf_subsystem_add_ns", 00:05:29.550 "nvmf_subsystem_listener_set_ana_state", 00:05:29.550 "nvmf_discovery_get_referrals", 00:05:29.550 "nvmf_discovery_remove_referral", 00:05:29.550 "nvmf_discovery_add_referral", 00:05:29.550 "nvmf_subsystem_remove_listener", 00:05:29.550 "nvmf_subsystem_add_listener", 00:05:29.550 "nvmf_delete_subsystem", 00:05:29.550 "nvmf_create_subsystem", 00:05:29.550 "nvmf_get_subsystems", 00:05:29.550 "env_dpdk_get_mem_stats", 00:05:29.550 "nbd_get_disks", 00:05:29.550 "nbd_stop_disk", 00:05:29.550 "nbd_start_disk", 00:05:29.550 "ublk_recover_disk", 00:05:29.550 "ublk_get_disks", 00:05:29.550 "ublk_stop_disk", 00:05:29.550 "ublk_start_disk", 00:05:29.550 "ublk_destroy_target", 00:05:29.550 "ublk_create_target", 00:05:29.550 "virtio_blk_create_transport", 00:05:29.550 "virtio_blk_get_transports", 00:05:29.550 "vhost_controller_set_coalescing", 00:05:29.550 "vhost_get_controllers", 00:05:29.550 "vhost_delete_controller", 00:05:29.550 "vhost_create_blk_controller", 00:05:29.550 "vhost_scsi_controller_remove_target", 00:05:29.550 "vhost_scsi_controller_add_target", 00:05:29.550 "vhost_start_scsi_controller", 00:05:29.550 "vhost_create_scsi_controller", 00:05:29.550 "thread_set_cpumask", 00:05:29.550 "scheduler_set_options", 00:05:29.550 "framework_get_governor", 00:05:29.550 "framework_get_scheduler", 00:05:29.550 
"framework_set_scheduler", 00:05:29.550 "framework_get_reactors", 00:05:29.550 "thread_get_io_channels", 00:05:29.550 "thread_get_pollers", 00:05:29.550 "thread_get_stats", 00:05:29.550 "framework_monitor_context_switch", 00:05:29.550 "spdk_kill_instance", 00:05:29.550 "log_enable_timestamps", 00:05:29.550 "log_get_flags", 00:05:29.550 "log_clear_flag", 00:05:29.550 "log_set_flag", 00:05:29.550 "log_get_level", 00:05:29.550 "log_set_level", 00:05:29.550 "log_get_print_level", 00:05:29.550 "log_set_print_level", 00:05:29.550 "framework_enable_cpumask_locks", 00:05:29.550 "framework_disable_cpumask_locks", 00:05:29.550 "framework_wait_init", 00:05:29.550 "framework_start_init", 00:05:29.550 "scsi_get_devices", 00:05:29.550 "bdev_get_histogram", 00:05:29.550 "bdev_enable_histogram", 00:05:29.550 "bdev_set_qos_limit", 00:05:29.550 "bdev_set_qd_sampling_period", 00:05:29.550 "bdev_get_bdevs", 00:05:29.550 "bdev_reset_iostat", 00:05:29.550 "bdev_get_iostat", 00:05:29.550 "bdev_examine", 00:05:29.550 "bdev_wait_for_examine", 00:05:29.550 "bdev_set_options", 00:05:29.550 "accel_get_stats", 00:05:29.550 "accel_set_options", 00:05:29.550 "accel_set_driver", 00:05:29.550 "accel_crypto_key_destroy", 00:05:29.550 "accel_crypto_keys_get", 00:05:29.550 "accel_crypto_key_create", 00:05:29.550 "accel_assign_opc", 00:05:29.550 "accel_get_module_info", 00:05:29.550 "accel_get_opc_assignments", 00:05:29.550 "vmd_rescan", 00:05:29.550 "vmd_remove_device", 00:05:29.550 "vmd_enable", 00:05:29.550 "sock_get_default_impl", 00:05:29.550 "sock_set_default_impl", 00:05:29.550 "sock_impl_set_options", 00:05:29.550 "sock_impl_get_options", 00:05:29.550 "iobuf_get_stats", 00:05:29.550 "iobuf_set_options", 00:05:29.550 "keyring_get_keys", 00:05:29.550 "vfu_tgt_set_base_path", 00:05:29.550 "framework_get_pci_devices", 00:05:29.550 "framework_get_config", 00:05:29.550 "framework_get_subsystems", 00:05:29.550 "fsdev_set_opts", 00:05:29.550 "fsdev_get_opts", 00:05:29.550 "trace_get_info", 
00:05:29.550 "trace_get_tpoint_group_mask", 00:05:29.550 "trace_disable_tpoint_group", 00:05:29.550 "trace_enable_tpoint_group", 00:05:29.550 "trace_clear_tpoint_mask", 00:05:29.550 "trace_set_tpoint_mask", 00:05:29.550 "notify_get_notifications", 00:05:29.550 "notify_get_types", 00:05:29.550 "spdk_get_version", 00:05:29.550 "rpc_get_methods" 00:05:29.550 ] 00:05:29.551 09:39:58 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:29.551 09:39:58 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:29.551 09:39:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.551 09:39:58 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:29.551 09:39:58 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1044996 00:05:29.551 09:39:58 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1044996 ']' 00:05:29.551 09:39:58 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1044996 00:05:29.551 09:39:58 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:29.551 09:39:58 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:29.551 09:39:58 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1044996 00:05:29.551 09:39:58 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:29.551 09:39:58 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:29.551 09:39:58 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1044996' 00:05:29.551 killing process with pid 1044996 00:05:29.551 09:39:58 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1044996 00:05:29.551 09:39:58 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1044996 00:05:30.118 00:05:30.118 real 0m1.114s 00:05:30.118 user 0m1.851s 00:05:30.118 sys 0m0.436s 00:05:30.118 09:39:58 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.118 09:39:58 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:05:30.118 ************************************ 00:05:30.118 END TEST spdkcli_tcp 00:05:30.118 ************************************ 00:05:30.118 09:39:58 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.118 09:39:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.118 09:39:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.118 09:39:58 -- common/autotest_common.sh@10 -- # set +x 00:05:30.118 ************************************ 00:05:30.118 START TEST dpdk_mem_utility 00:05:30.118 ************************************ 00:05:30.118 09:39:58 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.118 * Looking for test storage... 00:05:30.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:30.118 09:39:58 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:30.118 09:39:58 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:30.118 09:39:58 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:30.118 09:39:58 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.118 09:39:58 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:30.118 09:39:58 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.118 09:39:58 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 
00:05:30.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.118 --rc genhtml_branch_coverage=1 00:05:30.118 --rc genhtml_function_coverage=1 00:05:30.118 --rc genhtml_legend=1 00:05:30.118 --rc geninfo_all_blocks=1 00:05:30.118 --rc geninfo_unexecuted_blocks=1 00:05:30.118 00:05:30.118 ' 00:05:30.118 09:39:58 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:30.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.118 --rc genhtml_branch_coverage=1 00:05:30.118 --rc genhtml_function_coverage=1 00:05:30.118 --rc genhtml_legend=1 00:05:30.118 --rc geninfo_all_blocks=1 00:05:30.118 --rc geninfo_unexecuted_blocks=1 00:05:30.118 00:05:30.118 ' 00:05:30.118 09:39:58 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:30.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.118 --rc genhtml_branch_coverage=1 00:05:30.118 --rc genhtml_function_coverage=1 00:05:30.118 --rc genhtml_legend=1 00:05:30.118 --rc geninfo_all_blocks=1 00:05:30.118 --rc geninfo_unexecuted_blocks=1 00:05:30.118 00:05:30.118 ' 00:05:30.118 09:39:58 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:30.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.118 --rc genhtml_branch_coverage=1 00:05:30.118 --rc genhtml_function_coverage=1 00:05:30.118 --rc genhtml_legend=1 00:05:30.118 --rc geninfo_all_blocks=1 00:05:30.118 --rc geninfo_unexecuted_blocks=1 00:05:30.118 00:05:30.118 ' 00:05:30.118 09:39:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:30.118 09:39:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1045300 00:05:30.118 09:39:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1045300 00:05:30.118 09:39:58 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 
1045300 ']' 00:05:30.118 09:39:58 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.118 09:39:58 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.118 09:39:58 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.118 09:39:58 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.118 09:39:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:30.118 09:39:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.118 [2024-12-07 09:39:58.818769] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:30.118 [2024-12-07 09:39:58.818818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045300 ] 00:05:30.377 [2024-12-07 09:39:58.873197] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.377 [2024-12-07 09:39:58.914347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.634 09:39:59 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.634 09:39:59 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:30.634 09:39:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:30.634 09:39:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:30.634 09:39:59 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 
00:05:30.634 09:39:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:30.634 { 00:05:30.634 "filename": "/tmp/spdk_mem_dump.txt" 00:05:30.634 } 00:05:30.634 09:39:59 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.634 09:39:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:30.634 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:30.634 1 heaps totaling size 860.000000 MiB 00:05:30.634 size: 860.000000 MiB heap id: 0 00:05:30.634 end heaps---------- 00:05:30.634 9 mempools totaling size 642.649841 MiB 00:05:30.634 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:30.634 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:30.634 size: 92.545471 MiB name: bdev_io_1045300 00:05:30.634 size: 51.011292 MiB name: evtpool_1045300 00:05:30.634 size: 50.003479 MiB name: msgpool_1045300 00:05:30.634 size: 36.509338 MiB name: fsdev_io_1045300 00:05:30.634 size: 21.763794 MiB name: PDU_Pool 00:05:30.635 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:30.635 size: 0.026123 MiB name: Session_Pool 00:05:30.635 end mempools------- 00:05:30.635 6 memzones totaling size 4.142822 MiB 00:05:30.635 size: 1.000366 MiB name: RG_ring_0_1045300 00:05:30.635 size: 1.000366 MiB name: RG_ring_1_1045300 00:05:30.635 size: 1.000366 MiB name: RG_ring_4_1045300 00:05:30.635 size: 1.000366 MiB name: RG_ring_5_1045300 00:05:30.635 size: 0.125366 MiB name: RG_ring_2_1045300 00:05:30.635 size: 0.015991 MiB name: RG_ring_3_1045300 00:05:30.635 end memzones------- 00:05:30.635 09:39:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:30.635 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:05:30.635 list of free elements. 
size: 13.984680 MiB 00:05:30.635 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:30.635 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:30.635 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:30.635 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:30.635 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:30.635 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:30.635 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:30.635 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:30.635 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:30.635 element at address: 0x20001d800000 with size: 0.582886 MiB 00:05:30.635 element at address: 0x200003e00000 with size: 0.495605 MiB 00:05:30.635 element at address: 0x20000d800000 with size: 0.490723 MiB 00:05:30.635 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:30.635 element at address: 0x200007000000 with size: 0.481934 MiB 00:05:30.635 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:05:30.635 element at address: 0x200003a00000 with size: 0.354858 MiB 00:05:30.635 list of standard malloc elements. 
size: 199.218628 MiB 00:05:30.635 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:30.635 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:30.635 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:30.635 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:30.635 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:30.635 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:30.635 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:30.635 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:30.635 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:30.635 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:30.635 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:30.635 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:30.635 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:30.635 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:30.635 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:30.635 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:30.635 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:05:30.635 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:30.635 element at address: 0x200003a5f240 with size: 0.000183 MiB 00:05:30.635 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:30.635 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:30.635 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:30.635 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:30.635 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:30.635 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:30.635 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:30.635 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:30.635 element at 
address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:30.635 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:30.635 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:30.635 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:30.635 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:30.635 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:30.635 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:30.635 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:30.635 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:30.635 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:30.635 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:30.635 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:30.635 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:05:30.635 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:05:30.635 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:05:30.635 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:30.635 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:30.635 list of memzone associated elements. 
size: 646.796692 MiB 00:05:30.635 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:30.635 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:30.635 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:30.635 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:30.635 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:30.635 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1045300_0 00:05:30.635 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:30.635 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1045300_0 00:05:30.635 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:30.635 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1045300_0 00:05:30.635 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:30.635 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1045300_0 00:05:30.635 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:30.635 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:30.635 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:30.635 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:30.635 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:30.635 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1045300 00:05:30.635 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:30.635 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1045300 00:05:30.635 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:30.635 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1045300 00:05:30.635 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:30.635 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:30.635 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:30.635 
associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:30.635 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:30.635 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:30.635 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:30.635 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:30.635 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:30.635 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1045300 00:05:30.635 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:30.635 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1045300 00:05:30.635 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:30.635 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1045300 00:05:30.635 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:30.635 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1045300 00:05:30.635 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:30.635 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1045300 00:05:30.635 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:30.635 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1045300 00:05:30.635 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:30.635 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:30.635 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:30.635 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:30.635 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:30.635 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:30.635 element at address: 0x200003a5f300 with size: 0.125488 MiB 00:05:30.635 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1045300 00:05:30.635 element at address: 0x2000096f5b80 with size: 
0.031738 MiB 00:05:30.635 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:30.635 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:05:30.635 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:30.635 element at address: 0x200003a5b040 with size: 0.016113 MiB 00:05:30.635 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1045300 00:05:30.635 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:05:30.635 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:30.635 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:30.635 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1045300 00:05:30.635 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:30.635 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1045300 00:05:30.635 element at address: 0x200003a5ae40 with size: 0.000305 MiB 00:05:30.635 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1045300 00:05:30.635 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:05:30.635 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:30.635 09:39:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:30.635 09:39:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1045300 00:05:30.635 09:39:59 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1045300 ']' 00:05:30.635 09:39:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1045300 00:05:30.635 09:39:59 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:30.635 09:39:59 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:30.635 09:39:59 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1045300 00:05:30.635 09:39:59 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:30.635 
09:39:59 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:30.635 09:39:59 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1045300' 00:05:30.635 killing process with pid 1045300 00:05:30.635 09:39:59 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1045300 00:05:30.635 09:39:59 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1045300 00:05:30.893 00:05:30.893 real 0m0.973s 00:05:30.893 user 0m0.896s 00:05:30.893 sys 0m0.414s 00:05:30.893 09:39:59 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.893 09:39:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:30.893 ************************************ 00:05:30.893 END TEST dpdk_mem_utility 00:05:30.893 ************************************ 00:05:30.893 09:39:59 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:30.893 09:39:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.893 09:39:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.893 09:39:59 -- common/autotest_common.sh@10 -- # set +x 00:05:31.152 ************************************ 00:05:31.152 START TEST event 00:05:31.152 ************************************ 00:05:31.152 09:39:59 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:31.152 * Looking for test storage... 
00:05:31.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:31.152 09:39:59 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:31.152 09:39:59 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:31.152 09:39:59 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:31.152 09:39:59 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:31.152 09:39:59 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.152 09:39:59 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.152 09:39:59 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.152 09:39:59 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.152 09:39:59 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.152 09:39:59 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.152 09:39:59 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.152 09:39:59 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.152 09:39:59 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.152 09:39:59 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.152 09:39:59 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.152 09:39:59 event -- scripts/common.sh@344 -- # case "$op" in 00:05:31.152 09:39:59 event -- scripts/common.sh@345 -- # : 1 00:05:31.152 09:39:59 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.152 09:39:59 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.152 09:39:59 event -- scripts/common.sh@365 -- # decimal 1 00:05:31.152 09:39:59 event -- scripts/common.sh@353 -- # local d=1 00:05:31.152 09:39:59 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.152 09:39:59 event -- scripts/common.sh@355 -- # echo 1 00:05:31.152 09:39:59 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.152 09:39:59 event -- scripts/common.sh@366 -- # decimal 2 00:05:31.152 09:39:59 event -- scripts/common.sh@353 -- # local d=2 00:05:31.152 09:39:59 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.152 09:39:59 event -- scripts/common.sh@355 -- # echo 2 00:05:31.152 09:39:59 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.152 09:39:59 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.152 09:39:59 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.152 09:39:59 event -- scripts/common.sh@368 -- # return 0 00:05:31.152 09:39:59 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.152 09:39:59 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:31.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.152 --rc genhtml_branch_coverage=1 00:05:31.152 --rc genhtml_function_coverage=1 00:05:31.152 --rc genhtml_legend=1 00:05:31.152 --rc geninfo_all_blocks=1 00:05:31.152 --rc geninfo_unexecuted_blocks=1 00:05:31.152 00:05:31.152 ' 00:05:31.152 09:39:59 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:31.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.152 --rc genhtml_branch_coverage=1 00:05:31.152 --rc genhtml_function_coverage=1 00:05:31.152 --rc genhtml_legend=1 00:05:31.152 --rc geninfo_all_blocks=1 00:05:31.152 --rc geninfo_unexecuted_blocks=1 00:05:31.152 00:05:31.152 ' 00:05:31.152 09:39:59 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:31.152 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:31.152 --rc genhtml_branch_coverage=1 00:05:31.152 --rc genhtml_function_coverage=1 00:05:31.152 --rc genhtml_legend=1 00:05:31.152 --rc geninfo_all_blocks=1 00:05:31.152 --rc geninfo_unexecuted_blocks=1 00:05:31.152 00:05:31.152 ' 00:05:31.152 09:39:59 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:31.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.152 --rc genhtml_branch_coverage=1 00:05:31.152 --rc genhtml_function_coverage=1 00:05:31.153 --rc genhtml_legend=1 00:05:31.153 --rc geninfo_all_blocks=1 00:05:31.153 --rc geninfo_unexecuted_blocks=1 00:05:31.153 00:05:31.153 ' 00:05:31.153 09:39:59 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:31.153 09:39:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:31.153 09:39:59 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:31.153 09:39:59 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:31.153 09:39:59 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.153 09:39:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.153 ************************************ 00:05:31.153 START TEST event_perf 00:05:31.153 ************************************ 00:05:31.153 09:39:59 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:31.153 Running I/O for 1 seconds...[2024-12-07 09:39:59.876190] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:31.153 [2024-12-07 09:39:59.876259] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045590 ] 00:05:31.411 [2024-12-07 09:39:59.935437] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:31.411 [2024-12-07 09:39:59.977809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.411 [2024-12-07 09:39:59.977906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.411 [2024-12-07 09:39:59.977970] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.411 [2024-12-07 09:39:59.977996] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.348 Running I/O for 1 seconds... 00:05:32.348 lcore 0: 209714 00:05:32.348 lcore 1: 209712 00:05:32.348 lcore 2: 209712 00:05:32.348 lcore 3: 209713 00:05:32.348 done. 
00:05:32.348 00:05:32.348 real 0m1.189s 00:05:32.348 user 0m4.100s 00:05:32.348 sys 0m0.086s 00:05:32.348 09:40:01 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.348 09:40:01 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.348 ************************************ 00:05:32.348 END TEST event_perf 00:05:32.348 ************************************ 00:05:32.606 09:40:01 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:32.606 09:40:01 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:32.606 09:40:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.606 09:40:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.606 ************************************ 00:05:32.606 START TEST event_reactor 00:05:32.606 ************************************ 00:05:32.606 09:40:01 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:32.606 [2024-12-07 09:40:01.134033] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:32.606 [2024-12-07 09:40:01.134099] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045775 ] 00:05:32.606 [2024-12-07 09:40:01.192783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.606 [2024-12-07 09:40:01.231978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.983 test_start 00:05:33.983 oneshot 00:05:33.983 tick 100 00:05:33.983 tick 100 00:05:33.983 tick 250 00:05:33.983 tick 100 00:05:33.983 tick 100 00:05:33.983 tick 250 00:05:33.983 tick 100 00:05:33.983 tick 500 00:05:33.983 tick 100 00:05:33.983 tick 100 00:05:33.983 tick 250 00:05:33.983 tick 100 00:05:33.983 tick 100 00:05:33.983 test_end 00:05:33.983 00:05:33.984 real 0m1.176s 00:05:33.984 user 0m1.095s 00:05:33.984 sys 0m0.077s 00:05:33.984 09:40:02 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.984 09:40:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:33.984 ************************************ 00:05:33.984 END TEST event_reactor 00:05:33.984 ************************************ 00:05:33.984 09:40:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:33.984 09:40:02 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:33.984 09:40:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.984 09:40:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.984 ************************************ 00:05:33.984 START TEST event_reactor_perf 00:05:33.984 ************************************ 00:05:33.984 09:40:02 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:05:33.984 [2024-12-07 09:40:02.379285] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:33.984 [2024-12-07 09:40:02.379352] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045965 ] 00:05:33.984 [2024-12-07 09:40:02.436610] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.984 [2024-12-07 09:40:02.475310] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.933 test_start 00:05:34.933 test_end 00:05:34.933 Performance: 508985 events per second 00:05:34.933 00:05:34.933 real 0m1.174s 00:05:34.933 user 0m1.096s 00:05:34.933 sys 0m0.074s 00:05:34.933 09:40:03 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.933 09:40:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.933 ************************************ 00:05:34.933 END TEST event_reactor_perf 00:05:34.933 ************************************ 00:05:34.933 09:40:03 event -- event/event.sh@49 -- # uname -s 00:05:34.933 09:40:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:34.933 09:40:03 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:34.933 09:40:03 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.933 09:40:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.933 09:40:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.933 ************************************ 00:05:34.933 START TEST event_scheduler 00:05:34.933 ************************************ 00:05:34.933 09:40:03 event.event_scheduler -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:35.192 * Looking for test storage... 00:05:35.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:35.192 09:40:03 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:35.192 09:40:03 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:35.192 09:40:03 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:35.192 09:40:03 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.193 09:40:03 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:35.193 09:40:03 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.193 09:40:03 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:35.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.193 --rc genhtml_branch_coverage=1 00:05:35.193 --rc genhtml_function_coverage=1 00:05:35.193 --rc genhtml_legend=1 00:05:35.193 --rc geninfo_all_blocks=1 00:05:35.193 --rc geninfo_unexecuted_blocks=1 00:05:35.193 00:05:35.193 ' 00:05:35.193 09:40:03 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:35.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.193 --rc genhtml_branch_coverage=1 00:05:35.193 --rc genhtml_function_coverage=1 00:05:35.193 --rc 
genhtml_legend=1 00:05:35.193 --rc geninfo_all_blocks=1 00:05:35.193 --rc geninfo_unexecuted_blocks=1 00:05:35.193 00:05:35.193 ' 00:05:35.193 09:40:03 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:35.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.193 --rc genhtml_branch_coverage=1 00:05:35.193 --rc genhtml_function_coverage=1 00:05:35.193 --rc genhtml_legend=1 00:05:35.193 --rc geninfo_all_blocks=1 00:05:35.193 --rc geninfo_unexecuted_blocks=1 00:05:35.193 00:05:35.193 ' 00:05:35.193 09:40:03 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:35.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.193 --rc genhtml_branch_coverage=1 00:05:35.193 --rc genhtml_function_coverage=1 00:05:35.193 --rc genhtml_legend=1 00:05:35.193 --rc geninfo_all_blocks=1 00:05:35.193 --rc geninfo_unexecuted_blocks=1 00:05:35.193 00:05:35.193 ' 00:05:35.193 09:40:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:35.193 09:40:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1046279 00:05:35.193 09:40:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:35.193 09:40:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.193 09:40:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1046279 00:05:35.193 09:40:03 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1046279 ']' 00:05:35.193 09:40:03 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.193 09:40:03 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.193 09:40:03 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.193 09:40:03 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.193 09:40:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.193 [2024-12-07 09:40:03.812615] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:35.193 [2024-12-07 09:40:03.812678] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046279 ] 00:05:35.193 [2024-12-07 09:40:03.865429] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:35.193 [2024-12-07 09:40:03.907976] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.193 [2024-12-07 09:40:03.908015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.193 [2024-12-07 09:40:03.908103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.193 [2024-12-07 09:40:03.908105] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.453 09:40:03 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.453 09:40:03 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:35.453 09:40:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:35.453 09:40:03 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.453 09:40:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.453 [2024-12-07 09:40:03.988753] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:35.453 [2024-12-07 09:40:03.988773] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:35.453 [2024-12-07 09:40:03.988782] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:35.453 [2024-12-07 09:40:03.988788] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:35.453 [2024-12-07 09:40:03.988793] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:35.453 09:40:03 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.453 09:40:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:35.453 09:40:03 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.453 09:40:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.453 [2024-12-07 09:40:04.057277] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:35.453 09:40:04 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.453 09:40:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:35.453 09:40:04 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.453 09:40:04 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.453 09:40:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.453 ************************************ 00:05:35.453 START TEST scheduler_create_thread 00:05:35.453 ************************************ 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.453 2 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.453 3 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.453 4 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.453 5 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.453 09:40:04 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.453 6 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.453 7 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.453 8 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:35.453 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.453 09:40:04 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.712 9 00:05:35.712 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.712 09:40:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:35.712 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.712 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.712 10 00:05:35.712 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.712 09:40:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:35.713 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.713 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.972 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.972 09:40:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:35.972 09:40:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:35.972 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.972 09:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.921 09:40:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.921 09:40:05 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:36.921 09:40:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.921 09:40:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.857 09:40:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.857 09:40:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:37.857 09:40:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:37.857 09:40:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.857 09:40:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.793 09:40:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.793 00:05:38.793 real 0m3.231s 00:05:38.793 user 0m0.026s 00:05:38.793 sys 0m0.004s 00:05:38.793 09:40:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.793 09:40:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.793 ************************************ 00:05:38.793 END TEST scheduler_create_thread 00:05:38.793 ************************************ 00:05:38.793 09:40:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:38.793 09:40:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1046279 00:05:38.793 09:40:07 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1046279 ']' 00:05:38.793 09:40:07 event.event_scheduler -- common/autotest_common.sh@954 -- # 
kill -0 1046279 00:05:38.793 09:40:07 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:38.793 09:40:07 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.793 09:40:07 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1046279 00:05:38.793 09:40:07 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:38.793 09:40:07 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:38.793 09:40:07 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1046279' 00:05:38.793 killing process with pid 1046279 00:05:38.793 09:40:07 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1046279 00:05:38.793 09:40:07 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1046279 00:05:39.052 [2024-12-07 09:40:07.706330] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:39.311 00:05:39.311 real 0m4.358s 00:05:39.311 user 0m7.768s 00:05:39.311 sys 0m0.351s 00:05:39.311 09:40:07 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.311 09:40:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.311 ************************************ 00:05:39.311 END TEST event_scheduler 00:05:39.311 ************************************ 00:05:39.311 09:40:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:39.311 09:40:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:39.311 09:40:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.311 09:40:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.311 09:40:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.311 ************************************ 00:05:39.311 START TEST app_repeat 00:05:39.311 ************************************ 00:05:39.311 09:40:08 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:39.311 09:40:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.311 09:40:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.311 09:40:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:39.311 09:40:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.311 09:40:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:39.311 09:40:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:39.311 09:40:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:39.311 09:40:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1047101 00:05:39.311 09:40:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.311 09:40:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1047101' 00:05:39.311 
Process app_repeat pid: 1047101 00:05:39.311 09:40:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:39.311 09:40:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:39.311 spdk_app_start Round 0 00:05:39.311 09:40:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1047101 /var/tmp/spdk-nbd.sock 00:05:39.311 09:40:08 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:39.311 09:40:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1047101 ']' 00:05:39.311 09:40:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:39.311 09:40:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.311 09:40:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:39.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:39.311 09:40:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.311 09:40:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.570 [2024-12-07 09:40:08.051409] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:39.570 [2024-12-07 09:40:08.051459] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047101 ] 00:05:39.570 [2024-12-07 09:40:08.105896] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.570 [2024-12-07 09:40:08.148786] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.570 [2024-12-07 09:40:08.148791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.570 09:40:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.570 09:40:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:39.570 09:40:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.829 Malloc0 00:05:39.830 09:40:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.089 Malloc1 00:05:40.089 09:40:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.089 09:40:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.089 09:40:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.089 09:40:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.089 09:40:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.089 09:40:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.089 09:40:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.089 
09:40:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.089 09:40:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.089 09:40:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.089 09:40:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.089 09:40:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.089 09:40:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.089 09:40:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.089 09:40:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.089 09:40:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.348 /dev/nbd0 00:05:40.348 09:40:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.348 09:40:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.348 09:40:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:40.348 09:40:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:40.348 09:40:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:40.348 09:40:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:40.348 09:40:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:40.348 09:40:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:40.348 09:40:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:40.348 09:40:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:40.348 09:40:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:40.348 1+0 records in 00:05:40.348 1+0 records out 00:05:40.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207292 s, 19.8 MB/s 00:05:40.348 09:40:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.348 09:40:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:40.348 09:40:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.348 09:40:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:40.348 09:40:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:40.348 09:40:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.348 09:40:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.348 09:40:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.348 /dev/nbd1 00:05:40.607 09:40:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.607 09:40:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.607 09:40:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:40.607 09:40:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:40.607 09:40:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:40.607 09:40:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:40.607 09:40:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:40.607 09:40:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:40.607 09:40:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:40.607 09:40:09 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:40.607 09:40:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.607 1+0 records in 00:05:40.607 1+0 records out 00:05:40.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199339 s, 20.5 MB/s 00:05:40.607 09:40:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.607 09:40:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:40.607 09:40:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.607 09:40:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:40.607 09:40:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:40.607 09:40:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.607 09:40:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.607 09:40:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.607 09:40:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.607 09:40:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.607 09:40:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.607 { 00:05:40.607 "nbd_device": "/dev/nbd0", 00:05:40.607 "bdev_name": "Malloc0" 00:05:40.607 }, 00:05:40.607 { 00:05:40.607 "nbd_device": "/dev/nbd1", 00:05:40.607 "bdev_name": "Malloc1" 00:05:40.607 } 00:05:40.607 ]' 00:05:40.607 09:40:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.607 { 00:05:40.607 "nbd_device": "/dev/nbd0", 00:05:40.607 "bdev_name": "Malloc0" 00:05:40.607 
}, 00:05:40.607 { 00:05:40.607 "nbd_device": "/dev/nbd1", 00:05:40.607 "bdev_name": "Malloc1" 00:05:40.607 } 00:05:40.607 ]' 00:05:40.607 09:40:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.607 09:40:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.607 /dev/nbd1' 00:05:40.607 09:40:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.607 /dev/nbd1' 00:05:40.607 09:40:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.866 09:40:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.866 09:40:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.866 09:40:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.866 09:40:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.867 256+0 records in 00:05:40.867 256+0 records out 00:05:40.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106625 s, 98.3 MB/s 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.867 256+0 records in 00:05:40.867 256+0 records out 00:05:40.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135589 s, 77.3 MB/s 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.867 256+0 records in 00:05:40.867 256+0 records out 00:05:40.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146477 s, 71.6 MB/s 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.867 09:40:09 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.867 09:40:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.126 09:40:09 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.126 09:40:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.386 09:40:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.386 09:40:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.386 09:40:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.386 09:40:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.386 09:40:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.386 09:40:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.386 09:40:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.386 09:40:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.386 09:40:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.386 09:40:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.386 09:40:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.386 09:40:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.386 09:40:10 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.645 09:40:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.905 [2024-12-07 09:40:10.443798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.905 [2024-12-07 09:40:10.482122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.905 [2024-12-07 09:40:10.482125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.905 [2024-12-07 09:40:10.523267] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.905 [2024-12-07 09:40:10.523308] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.192 09:40:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.192 09:40:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:45.192 spdk_app_start Round 1 00:05:45.192 09:40:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1047101 /var/tmp/spdk-nbd.sock 00:05:45.192 09:40:13 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1047101 ']' 00:05:45.192 09:40:13 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.192 09:40:13 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.192 09:40:13 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:45.192 09:40:13 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.192 09:40:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.192 09:40:13 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.192 09:40:13 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:45.192 09:40:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.192 Malloc0 00:05:45.192 09:40:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.192 Malloc1 00:05:45.192 09:40:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.192 09:40:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.192 09:40:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.192 09:40:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.192 09:40:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.192 09:40:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.192 09:40:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.192 09:40:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.192 09:40:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.192 09:40:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.192 09:40:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.192 09:40:13 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:45.192 09:40:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.192 09:40:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.192 09:40:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.192 09:40:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.451 /dev/nbd0 00:05:45.451 09:40:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.451 09:40:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.451 09:40:14 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:45.451 09:40:14 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:45.451 09:40:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:45.451 09:40:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:45.451 09:40:14 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:45.451 09:40:14 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:45.451 09:40:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:45.451 09:40:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:45.451 09:40:14 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.451 1+0 records in 00:05:45.451 1+0 records out 00:05:45.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191624 s, 21.4 MB/s 00:05:45.451 09:40:14 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.451 09:40:14 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:45.451 09:40:14 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.451 09:40:14 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:45.451 09:40:14 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:45.451 09:40:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.451 09:40:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.451 09:40:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.710 /dev/nbd1 00:05:45.710 09:40:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.710 09:40:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.710 09:40:14 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:45.710 09:40:14 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:45.710 09:40:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:45.710 09:40:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:45.710 09:40:14 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:45.710 09:40:14 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:45.710 09:40:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:45.710 09:40:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:45.710 09:40:14 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.710 1+0 records in 00:05:45.710 1+0 records out 00:05:45.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234009 s, 17.5 MB/s 00:05:45.710 09:40:14 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.710 09:40:14 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:45.710 09:40:14 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.710 09:40:14 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:45.710 09:40:14 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:45.710 09:40:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.710 09:40:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.710 09:40:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.710 09:40:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.710 09:40:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:45.970 { 00:05:45.970 "nbd_device": "/dev/nbd0", 00:05:45.970 "bdev_name": "Malloc0" 00:05:45.970 }, 00:05:45.970 { 00:05:45.970 "nbd_device": "/dev/nbd1", 00:05:45.970 "bdev_name": "Malloc1" 00:05:45.970 } 00:05:45.970 ]' 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.970 { 00:05:45.970 "nbd_device": "/dev/nbd0", 00:05:45.970 "bdev_name": "Malloc0" 00:05:45.970 }, 00:05:45.970 { 00:05:45.970 "nbd_device": "/dev/nbd1", 00:05:45.970 "bdev_name": "Malloc1" 00:05:45.970 } 00:05:45.970 ]' 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.970 /dev/nbd1' 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.970 /dev/nbd1' 00:05:45.970 
09:40:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.970 256+0 records in 00:05:45.970 256+0 records out 00:05:45.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101166 s, 104 MB/s 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.970 256+0 records in 00:05:45.970 256+0 records out 00:05:45.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01363 s, 76.9 MB/s 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.970 256+0 records in 00:05:45.970 256+0 records out 00:05:45.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146953 s, 71.4 MB/s 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.970 09:40:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.229 09:40:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.229 09:40:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.229 09:40:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.229 09:40:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.229 09:40:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.229 09:40:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.229 09:40:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.229 09:40:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.229 09:40:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.229 09:40:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.488 09:40:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.488 09:40:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.488 09:40:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.488 09:40:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.488 09:40:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.488 09:40:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.488 09:40:15 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:46.488 09:40:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.488 09:40:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.488 09:40:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.488 09:40:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.748 09:40:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.748 09:40:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.748 09:40:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.748 09:40:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.748 09:40:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.748 09:40:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.748 09:40:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.748 09:40:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.748 09:40:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.748 09:40:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.748 09:40:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.748 09:40:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.748 09:40:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.006 09:40:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:47.006 [2024-12-07 09:40:15.710100] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.264 [2024-12-07 09:40:15.747191] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.265 [2024-12-07 09:40:15.747195] 
reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.265 [2024-12-07 09:40:15.788770] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.265 [2024-12-07 09:40:15.788811] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.548 09:40:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.548 09:40:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:50.548 spdk_app_start Round 2 00:05:50.548 09:40:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1047101 /var/tmp/spdk-nbd.sock 00:05:50.548 09:40:18 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1047101 ']' 00:05:50.548 09:40:18 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.548 09:40:18 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.548 09:40:18 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:50.548 09:40:18 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.548 09:40:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.548 09:40:18 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.548 09:40:18 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:50.548 09:40:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.548 Malloc0 00:05:50.548 09:40:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.548 Malloc1 00:05:50.548 09:40:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.548 09:40:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.548 09:40:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.548 09:40:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.548 09:40:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.548 09:40:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.548 09:40:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.548 09:40:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.548 09:40:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.548 09:40:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.548 09:40:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.548 09:40:19 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:50.548 09:40:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.548 09:40:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.548 09:40:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.548 09:40:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.837 /dev/nbd0 00:05:50.837 09:40:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.837 09:40:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.837 09:40:19 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:50.837 09:40:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:50.837 09:40:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:50.837 09:40:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:50.837 09:40:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:50.837 09:40:19 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:50.837 09:40:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:50.837 09:40:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:50.837 09:40:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.837 1+0 records in 00:05:50.837 1+0 records out 00:05:50.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212258 s, 19.3 MB/s 00:05:50.837 09:40:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.837 09:40:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:50.837 09:40:19 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.837 09:40:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:50.837 09:40:19 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:50.837 09:40:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.837 09:40:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.837 09:40:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.095 /dev/nbd1 00:05:51.095 09:40:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.095 09:40:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.095 09:40:19 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:51.095 09:40:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:51.095 09:40:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:51.095 09:40:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:51.095 09:40:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:51.095 09:40:19 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:51.095 09:40:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:51.095 09:40:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:51.095 09:40:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.095 1+0 records in 00:05:51.095 1+0 records out 00:05:51.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214849 s, 19.1 MB/s 00:05:51.095 09:40:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.095 09:40:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:51.095 09:40:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.095 09:40:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:51.095 09:40:19 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:51.095 09:40:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.095 09:40:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.095 09:40:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.095 09:40:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.095 09:40:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.354 { 00:05:51.354 "nbd_device": "/dev/nbd0", 00:05:51.354 "bdev_name": "Malloc0" 00:05:51.354 }, 00:05:51.354 { 00:05:51.354 "nbd_device": "/dev/nbd1", 00:05:51.354 "bdev_name": "Malloc1" 00:05:51.354 } 00:05:51.354 ]' 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.354 { 00:05:51.354 "nbd_device": "/dev/nbd0", 00:05:51.354 "bdev_name": "Malloc0" 00:05:51.354 }, 00:05:51.354 { 00:05:51.354 "nbd_device": "/dev/nbd1", 00:05:51.354 "bdev_name": "Malloc1" 00:05:51.354 } 00:05:51.354 ]' 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.354 /dev/nbd1' 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.354 /dev/nbd1' 00:05:51.354 
09:40:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.354 256+0 records in 00:05:51.354 256+0 records out 00:05:51.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106669 s, 98.3 MB/s 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.354 256+0 records in 00:05:51.354 256+0 records out 00:05:51.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139791 s, 75.0 MB/s 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.354 256+0 records in 00:05:51.354 256+0 records out 00:05:51.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146805 s, 71.4 MB/s 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.354 09:40:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.612 09:40:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.612 09:40:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.612 09:40:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.612 09:40:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.612 09:40:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.612 09:40:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.612 09:40:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.612 09:40:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.612 09:40:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.612 09:40:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.871 09:40:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.871 09:40:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.871 09:40:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.871 09:40:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.871 09:40:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.871 09:40:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.871 09:40:20 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:51.871 09:40:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.871 09:40:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.871 09:40:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.871 09:40:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.871 09:40:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.871 09:40:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.871 09:40:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.129 09:40:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.129 09:40:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.129 09:40:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.129 09:40:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.129 09:40:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.129 09:40:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.129 09:40:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.129 09:40:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.129 09:40:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.129 09:40:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.129 09:40:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.387 [2024-12-07 09:40:20.994206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.387 [2024-12-07 09:40:21.030647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.387 [2024-12-07 09:40:21.030651] 
reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.387 [2024-12-07 09:40:21.071675] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.387 [2024-12-07 09:40:21.071716] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:55.674 09:40:23 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1047101 /var/tmp/spdk-nbd.sock 00:05:55.675 09:40:23 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1047101 ']' 00:05:55.675 09:40:23 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.675 09:40:23 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.675 09:40:23 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:55.675 09:40:23 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.675 09:40:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.675 09:40:24 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.675 09:40:24 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:55.675 09:40:24 event.app_repeat -- event/event.sh@39 -- # killprocess 1047101 00:05:55.675 09:40:24 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1047101 ']' 00:05:55.675 09:40:24 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1047101 00:05:55.675 09:40:24 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:55.675 09:40:24 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.675 09:40:24 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1047101 00:05:55.675 09:40:24 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.675 09:40:24 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.675 09:40:24 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1047101' 00:05:55.675 killing process with pid 1047101 00:05:55.675 09:40:24 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1047101 00:05:55.675 09:40:24 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1047101 00:05:55.675 spdk_app_start is called in Round 0. 00:05:55.675 Shutdown signal received, stop current app iteration 00:05:55.675 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:05:55.675 spdk_app_start is called in Round 1. 00:05:55.675 Shutdown signal received, stop current app iteration 00:05:55.675 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:05:55.675 spdk_app_start is called in Round 2. 
00:05:55.675 Shutdown signal received, stop current app iteration 00:05:55.675 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:05:55.675 spdk_app_start is called in Round 3. 00:05:55.675 Shutdown signal received, stop current app iteration 00:05:55.675 09:40:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:55.675 09:40:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:55.675 00:05:55.675 real 0m16.210s 00:05:55.675 user 0m35.463s 00:05:55.675 sys 0m2.561s 00:05:55.675 09:40:24 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.675 09:40:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.675 ************************************ 00:05:55.675 END TEST app_repeat 00:05:55.675 ************************************ 00:05:55.675 09:40:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:55.675 09:40:24 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:55.675 09:40:24 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.675 09:40:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.675 09:40:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.675 ************************************ 00:05:55.675 START TEST cpu_locks 00:05:55.675 ************************************ 00:05:55.675 09:40:24 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:55.675 * Looking for test storage... 
00:05:55.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:55.675 09:40:24 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:55.675 09:40:24 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:55.675 09:40:24 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:55.934 09:40:24 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:55.934 09:40:24 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.934 09:40:24 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.934 09:40:24 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.934 09:40:24 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.934 09:40:24 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.934 09:40:24 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.934 09:40:24 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.934 09:40:24 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.934 09:40:24 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.934 09:40:24 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.934 09:40:24 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.934 09:40:24 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:55.934 09:40:24 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:55.934 09:40:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.935 09:40:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.935 09:40:24 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:55.935 09:40:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:55.935 09:40:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.935 09:40:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:55.935 09:40:24 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.935 09:40:24 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:55.935 09:40:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:55.935 09:40:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.935 09:40:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:55.935 09:40:24 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.935 09:40:24 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.935 09:40:24 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.935 09:40:24 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:55.935 09:40:24 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.935 09:40:24 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:55.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.935 --rc genhtml_branch_coverage=1 00:05:55.935 --rc genhtml_function_coverage=1 00:05:55.935 --rc genhtml_legend=1 00:05:55.935 --rc geninfo_all_blocks=1 00:05:55.935 --rc geninfo_unexecuted_blocks=1 00:05:55.935 00:05:55.935 ' 00:05:55.935 09:40:24 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:55.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.935 --rc genhtml_branch_coverage=1 00:05:55.935 --rc genhtml_function_coverage=1 00:05:55.935 --rc genhtml_legend=1 00:05:55.935 --rc geninfo_all_blocks=1 00:05:55.935 --rc geninfo_unexecuted_blocks=1 
00:05:55.935 00:05:55.935 ' 00:05:55.935 09:40:24 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:55.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.935 --rc genhtml_branch_coverage=1 00:05:55.935 --rc genhtml_function_coverage=1 00:05:55.935 --rc genhtml_legend=1 00:05:55.935 --rc geninfo_all_blocks=1 00:05:55.935 --rc geninfo_unexecuted_blocks=1 00:05:55.935 00:05:55.935 ' 00:05:55.935 09:40:24 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:55.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.935 --rc genhtml_branch_coverage=1 00:05:55.935 --rc genhtml_function_coverage=1 00:05:55.935 --rc genhtml_legend=1 00:05:55.935 --rc geninfo_all_blocks=1 00:05:55.935 --rc geninfo_unexecuted_blocks=1 00:05:55.935 00:05:55.935 ' 00:05:55.935 09:40:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:55.935 09:40:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:55.935 09:40:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:55.935 09:40:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:55.935 09:40:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.935 09:40:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.935 09:40:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.935 ************************************ 00:05:55.935 START TEST default_locks 00:05:55.935 ************************************ 00:05:55.935 09:40:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:55.935 09:40:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1050057 00:05:55.935 09:40:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1050057 00:05:55.935 09:40:24 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.935 09:40:24 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1050057 ']' 00:05:55.935 09:40:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.935 09:40:24 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.935 09:40:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.935 09:40:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.935 09:40:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.935 [2024-12-07 09:40:24.536165] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:55.935 [2024-12-07 09:40:24.536211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050057 ] 00:05:55.935 [2024-12-07 09:40:24.590491] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.935 [2024-12-07 09:40:24.631196] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.194 09:40:24 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.194 09:40:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:56.194 09:40:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1050057 00:05:56.194 09:40:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1050057 00:05:56.194 09:40:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.454 lslocks: write error 00:05:56.454 09:40:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1050057 00:05:56.454 09:40:24 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1050057 ']' 00:05:56.454 09:40:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1050057 00:05:56.454 09:40:24 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:56.454 09:40:24 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.454 09:40:24 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1050057 00:05:56.454 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.454 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.454 09:40:25 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1050057' 00:05:56.454 killing process with pid 1050057 00:05:56.454 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1050057 00:05:56.454 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1050057 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1050057 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1050057 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1050057 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1050057 ']' 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1050057) - No such process 00:05:56.713 ERROR: process (pid: 1050057) is no longer running 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:56.713 00:05:56.713 real 0m0.873s 00:05:56.713 user 0m0.829s 00:05:56.713 sys 0m0.424s 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.713 09:40:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.713 ************************************ 00:05:56.713 END TEST default_locks 00:05:56.713 ************************************ 00:05:56.713 09:40:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:56.713 09:40:25 event.cpu_locks -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.713 09:40:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.713 09:40:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.714 ************************************ 00:05:56.714 START TEST default_locks_via_rpc 00:05:56.714 ************************************ 00:05:56.714 09:40:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:56.714 09:40:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1050161 00:05:56.714 09:40:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1050161 00:05:56.714 09:40:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1050161 ']' 00:05:56.714 09:40:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.714 09:40:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.714 09:40:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.714 09:40:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.714 09:40:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.714 09:40:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.973 [2024-12-07 09:40:25.466624] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:56.973 [2024-12-07 09:40:25.466670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050161 ] 00:05:56.973 [2024-12-07 09:40:25.519666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.973 [2024-12-07 09:40:25.560326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.232 09:40:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.232 09:40:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:57.232 09:40:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:57.232 09:40:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.232 09:40:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.232 09:40:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.232 09:40:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:57.232 09:40:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.232 09:40:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.232 09:40:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.232 09:40:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:57.232 09:40:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.232 09:40:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.232 09:40:25 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.232 09:40:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1050161 00:05:57.232 09:40:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1050161 00:05:57.232 09:40:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.491 09:40:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1050161 00:05:57.491 09:40:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1050161 ']' 00:05:57.491 09:40:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1050161 00:05:57.491 09:40:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:57.491 09:40:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.491 09:40:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1050161 00:05:57.491 09:40:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.491 09:40:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.491 09:40:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1050161' 00:05:57.491 killing process with pid 1050161 00:05:57.491 09:40:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1050161 00:05:57.491 09:40:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1050161 00:05:57.751 00:05:57.751 real 0m0.993s 00:05:57.751 user 0m0.947s 00:05:57.751 sys 0m0.457s 00:05:57.751 09:40:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.751 09:40:26 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.751 ************************************ 00:05:57.751 END TEST default_locks_via_rpc 00:05:57.751 ************************************ 00:05:57.751 09:40:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:57.751 09:40:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.751 09:40:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.751 09:40:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.751 ************************************ 00:05:57.751 START TEST non_locking_app_on_locked_coremask 00:05:57.751 ************************************ 00:05:57.751 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:57.751 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1050415 00:05:57.751 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1050415 /var/tmp/spdk.sock 00:05:57.751 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1050415 ']' 00:05:57.751 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.751 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.751 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:57.751 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.751 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.751 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.010 [2024-12-07 09:40:26.522020] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:58.010 [2024-12-07 09:40:26.522058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050415 ] 00:05:58.010 [2024-12-07 09:40:26.574877] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.010 [2024-12-07 09:40:26.614535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.269 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.269 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:58.269 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1050418 00:05:58.269 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1050418 /var/tmp/spdk2.sock 00:05:58.269 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:58.269 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1050418 ']' 00:05:58.269 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.269 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.269 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.269 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.269 09:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.269 [2024-12-07 09:40:26.860078] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:58.269 [2024-12-07 09:40:26.860127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050418 ] 00:05:58.269 [2024-12-07 09:40:26.931833] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:58.269 [2024-12-07 09:40:26.931855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.528 [2024-12-07 09:40:27.016312] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.097 09:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.097 09:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:59.097 09:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1050415 00:05:59.097 09:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.097 09:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1050415 00:05:59.662 lslocks: write error 00:05:59.662 09:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1050415 00:05:59.662 09:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1050415 ']' 00:05:59.662 09:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1050415 00:05:59.662 09:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:59.662 09:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.662 09:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1050415 00:05:59.662 09:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.662 09:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.662 09:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1050415' 00:05:59.662 killing process with pid 1050415 00:05:59.662 09:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1050415 00:05:59.662 09:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1050415 00:06:00.597 09:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1050418 00:06:00.597 09:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1050418 ']' 00:06:00.597 09:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1050418 00:06:00.597 09:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:00.597 09:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.597 09:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1050418 00:06:00.597 09:40:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.597 09:40:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.597 09:40:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1050418' 00:06:00.597 killing process with pid 1050418 00:06:00.597 09:40:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1050418 00:06:00.597 09:40:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1050418 00:06:00.856 00:06:00.856 real 0m2.876s 00:06:00.856 user 0m3.006s 00:06:00.856 sys 0m1.002s 00:06:00.856 09:40:29 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.856 09:40:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.856 ************************************ 00:06:00.856 END TEST non_locking_app_on_locked_coremask 00:06:00.856 ************************************ 00:06:00.856 09:40:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:00.856 09:40:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.856 09:40:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.856 09:40:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.856 ************************************ 00:06:00.856 START TEST locking_app_on_unlocked_coremask 00:06:00.856 ************************************ 00:06:00.856 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:00.856 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1050920 00:06:00.856 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1050920 /var/tmp/spdk.sock 00:06:00.856 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:00.856 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1050920 ']' 00:06:00.856 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.856 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.856 09:40:29 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.856 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.856 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.856 [2024-12-07 09:40:29.452320] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:00.856 [2024-12-07 09:40:29.452360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050920 ] 00:06:00.856 [2024-12-07 09:40:29.506270] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
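The repeated `killprocess` traces above (`kill -0`, the `uname`/Linux check, `ps --no-headers -o comm=`, the "killing process with pid" message, then `kill` and `wait`) suggest roughly the following shape. This is a hedged reconstruction: the traced sudo/`reactor_0` branching is elided and the exact body is an assumption, not the actual helper.

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern from the trace above.
killprocess() {
	local pid=$1
	[ -n "$pid" ] || return 1
	kill -0 "$pid" 2>/dev/null || return 0 # already gone: nothing to do
	# Resolve the process name, as the trace does with ps --no-headers -o comm=
	local process_name
	process_name=$(ps --no-headers -o comm= "$pid" 2>/dev/null)
	echo "killing process with pid $pid"
	kill "$pid" 2>/dev/null
	wait "$pid" 2>/dev/null # reap the child so its pid slot is released
	return 0
}
```

The `wait` at the end matches the `autotest_common.sh@974 -- # wait <pid>` lines in the log: killing without reaping would leave zombies and could let a later test race against the dying target's lock files.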
00:06:00.856 [2024-12-07 09:40:29.506294] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.856 [2024-12-07 09:40:29.548201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.115 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.115 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:01.115 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1050936 00:06:01.115 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1050936 /var/tmp/spdk2.sock 00:06:01.115 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:01.115 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1050936 ']' 00:06:01.115 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.115 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.115 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.115 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.115 09:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.115 [2024-12-07 09:40:29.796056] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:01.115 [2024-12-07 09:40:29.796106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050936 ] 00:06:01.373 [2024-12-07 09:40:29.873045] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.373 [2024-12-07 09:40:29.954517] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.938 09:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.938 09:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:01.938 09:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1050936 00:06:01.938 09:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1050936 00:06:01.938 09:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.502 lslocks: write error 00:06:02.502 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1050920 00:06:02.502 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1050920 ']' 00:06:02.502 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1050920 00:06:02.502 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:02.502 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.502 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1050920 00:06:02.760 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.760 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.760 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1050920' 00:06:02.760 killing process with pid 1050920 00:06:02.760 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1050920 00:06:02.760 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1050920 00:06:03.325 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1050936 00:06:03.325 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1050936 ']' 00:06:03.325 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1050936 00:06:03.325 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:03.325 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.325 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1050936 00:06:03.325 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.325 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.326 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1050936' 00:06:03.326 killing process with pid 1050936 00:06:03.326 09:40:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1050936 00:06:03.326 09:40:31 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1050936 00:06:03.584 00:06:03.584 real 0m2.854s 00:06:03.584 user 0m2.994s 00:06:03.584 sys 0m0.988s 00:06:03.584 09:40:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.584 09:40:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.584 ************************************ 00:06:03.584 END TEST locking_app_on_unlocked_coremask 00:06:03.584 ************************************ 00:06:03.584 09:40:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:03.584 09:40:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.584 09:40:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.584 09:40:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.843 ************************************ 00:06:03.843 START TEST locking_app_on_locked_coremask 00:06:03.843 ************************************ 00:06:03.843 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:03.843 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1051418 00:06:03.843 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1051418 /var/tmp/spdk.sock 00:06:03.843 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.843 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1051418 ']' 00:06:03.843 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
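The `locks_exist` checks throughout this log pair `lslocks -p <pid>` with `grep -q spdk_cpu_lock`, succeeding only when the target pid holds a file lock whose path mentions `spdk_cpu_lock`. A sketch of that helper, with the body assumed from the traced commands:

```shell
#!/usr/bin/env bash
# Sketch of the locks_exist pattern: succeed only if the given pid
# holds a file lock whose path matches spdk_cpu_lock.
locks_exist() {
	local pid=$1
	lslocks -p "$pid" | grep -q spdk_cpu_lock
}
```

This also explains the stray `lslocks: write error` lines in the log: `grep -q` exits as soon as it matches, closing the pipe, so `lslocks` gets EPIPE on its remaining output. The message is cosmetic, not a test failure.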
00:06:03.843 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.843 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.843 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.843 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.843 [2024-12-07 09:40:32.380649] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:03.843 [2024-12-07 09:40:32.380694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1051418 ] 00:06:03.843 [2024-12-07 09:40:32.434813] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.843 [2024-12-07 09:40:32.472061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.102 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.102 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:04.102 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1051537 00:06:04.102 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1051537 /var/tmp/spdk2.sock 00:06:04.102 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:06:04.102 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:04.102 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1051537 /var/tmp/spdk2.sock 00:06:04.102 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:04.102 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.102 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:04.102 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.102 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1051537 /var/tmp/spdk2.sock 00:06:04.102 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1051537 ']' 00:06:04.102 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.102 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.102 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:04.102 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.102 09:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.102 [2024-12-07 09:40:32.726930] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:04.102 [2024-12-07 09:40:32.727016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1051537 ] 00:06:04.102 [2024-12-07 09:40:32.817098] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1051418 has claimed it. 00:06:04.102 [2024-12-07 09:40:32.817140] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:04.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1051537) - No such process 00:06:04.670 ERROR: process (pid: 1051537) is no longer running 00:06:04.670 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.670 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:04.670 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:04.670 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:04.670 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:04.670 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:04.670 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1051418 00:06:04.670 09:40:33 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1051418 00:06:04.670 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.286 lslocks: write error 00:06:05.286 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1051418 00:06:05.286 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1051418 ']' 00:06:05.286 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1051418 00:06:05.286 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:05.286 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.286 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1051418 00:06:05.286 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.286 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.286 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1051418' 00:06:05.286 killing process with pid 1051418 00:06:05.286 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1051418 00:06:05.286 09:40:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1051418 00:06:05.579 00:06:05.579 real 0m1.828s 00:06:05.579 user 0m1.938s 00:06:05.579 sys 0m0.673s 00:06:05.579 09:40:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.579 09:40:34 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:05.579 ************************************ 00:06:05.579 END TEST locking_app_on_locked_coremask 00:06:05.579 ************************************ 00:06:05.579 09:40:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:05.579 09:40:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.579 09:40:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.579 09:40:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.579 ************************************ 00:06:05.579 START TEST locking_overlapped_coremask 00:06:05.579 ************************************ 00:06:05.579 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:05.579 09:40:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1051904 00:06:05.579 09:40:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1051904 /var/tmp/spdk.sock 00:06:05.579 09:40:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:05.579 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1051904 ']' 00:06:05.579 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.579 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.579 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:05.579 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.579 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.579 [2024-12-07 09:40:34.282590] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:05.579 [2024-12-07 09:40:34.282633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1051904 ] 00:06:05.861 [2024-12-07 09:40:34.337274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.861 [2024-12-07 09:40:34.377863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.861 [2024-12-07 09:40:34.377969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.861 [2024-12-07 09:40:34.377975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.176 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.176 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:06.176 09:40:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1051914 00:06:06.176 09:40:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:06.176 09:40:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1051914 /var/tmp/spdk2.sock 00:06:06.177 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:06.177 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg 
waitforlisten 1051914 /var/tmp/spdk2.sock 00:06:06.177 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:06.177 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.177 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:06.177 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.177 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1051914 /var/tmp/spdk2.sock 00:06:06.177 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1051914 ']' 00:06:06.177 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.177 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.177 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.177 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.177 09:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.177 [2024-12-07 09:40:34.626318] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:06.177 [2024-12-07 09:40:34.626360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1051914 ] 00:06:06.177 [2024-12-07 09:40:34.702689] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1051904 has claimed it. 00:06:06.177 [2024-12-07 09:40:34.702729] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:06.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1051914) - No such process 00:06:06.758 ERROR: process (pid: 1051914) is no longer running 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1051904 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1051904 ']' 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1051904 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1051904 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1051904' 00:06:06.758 killing process with pid 1051904 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1051904 00:06:06.758 09:40:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1051904 00:06:07.018 00:06:07.018 real 0m1.420s 00:06:07.018 user 0m3.892s 00:06:07.018 sys 0m0.384s 00:06:07.018 09:40:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.018 09:40:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.018 
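The `check_remaining_locks` trace above globs `/var/tmp/spdk_cpu_lock_*` into `locks`, builds `locks_expected` from the brace expansion `/var/tmp/spdk_cpu_lock_{000..002}` (cores 0-2 of the 0x7 cpumask), and asserts the two match. A sketch of that comparison, parameterized by directory for testability (the real helper hardcodes `/var/tmp`):

```shell
#!/usr/bin/env bash
# Sketch of check_remaining_locks: the lock files left on disk must
# exactly match the expected set for a three-core (0x7) cpumask.
check_remaining_locks() {
	local dir=${1:-/var/tmp} # directory parameter is an illustration aid
	local locks=("$dir"/spdk_cpu_lock_*)
	local locks_expected=("$dir"/spdk_cpu_lock_{000..002})
	[[ "${locks[*]}" == "${locks_expected[*]}" ]]
}
```

Comparing `${locks[*]}` against `${locks_expected[*]}` flattens both arrays to space-joined strings, so any missing, extra, or misnamed lock file makes the `[[ ... ]]` test fail, which is exactly the condition the `@38 -- # [[ ... ]]` trace line is checking.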
************************************ 00:06:07.018 END TEST locking_overlapped_coremask 00:06:07.018 ************************************ 00:06:07.018 09:40:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:07.018 09:40:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.018 09:40:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.018 09:40:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.018 ************************************ 00:06:07.018 START TEST locking_overlapped_coremask_via_rpc 00:06:07.018 ************************************ 00:06:07.018 09:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:07.018 09:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1052173 00:06:07.018 09:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1052173 /var/tmp/spdk.sock 00:06:07.018 09:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:07.018 09:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1052173 ']' 00:06:07.018 09:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.018 09:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.018 09:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:07.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.018 09:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.018 09:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.278 [2024-12-07 09:40:35.771867] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:07.278 [2024-12-07 09:40:35.771914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1052173 ] 00:06:07.278 [2024-12-07 09:40:35.827362] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:07.278 [2024-12-07 09:40:35.827387] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.278 [2024-12-07 09:40:35.869050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.278 [2024-12-07 09:40:35.869147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.278 [2024-12-07 09:40:35.869149] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.537 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.537 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:07.537 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1052181 00:06:07.537 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1052181 /var/tmp/spdk2.sock 00:06:07.537 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:06:07.537 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1052181 ']' 00:06:07.537 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.537 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.537 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.537 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.537 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.537 [2024-12-07 09:40:36.113781] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:07.537 [2024-12-07 09:40:36.113825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1052181 ] 00:06:07.537 [2024-12-07 09:40:36.190511] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:07.537 [2024-12-07 09:40:36.190543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.796 [2024-12-07 09:40:36.276009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.796 [2024-12-07 09:40:36.276122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.796 [2024-12-07 09:40:36.276123] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.363 09:40:36 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.363 [2024-12-07 09:40:36.984020] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1052173 has claimed it. 00:06:08.363 request: 00:06:08.363 { 00:06:08.363 "method": "framework_enable_cpumask_locks", 00:06:08.363 "req_id": 1 00:06:08.363 } 00:06:08.363 Got JSON-RPC error response 00:06:08.363 response: 00:06:08.363 { 00:06:08.363 "code": -32603, 00:06:08.363 "message": "Failed to claim CPU core: 2" 00:06:08.363 } 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1052173 /var/tmp/spdk.sock 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 
-- # '[' -z 1052173 ']' 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.363 09:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.621 09:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.621 09:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:08.621 09:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1052181 /var/tmp/spdk2.sock 00:06:08.621 09:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1052181 ']' 00:06:08.621 09:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.621 09:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.621 09:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:08.621 09:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.621 09:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.880 09:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.880 09:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:08.880 09:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:08.880 09:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:08.880 09:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:08.880 09:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:08.880 00:06:08.880 real 0m1.676s 00:06:08.880 user 0m0.813s 00:06:08.880 sys 0m0.141s 00:06:08.880 09:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.880 09:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.880 ************************************ 00:06:08.880 END TEST locking_overlapped_coremask_via_rpc 00:06:08.880 ************************************ 00:06:08.880 09:40:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:08.880 09:40:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1052173 ]] 00:06:08.880 09:40:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1052173 00:06:08.880 09:40:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1052173 ']' 00:06:08.880 09:40:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1052173 00:06:08.880 09:40:37 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:08.880 09:40:37 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.880 09:40:37 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1052173 00:06:08.880 09:40:37 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.880 09:40:37 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.880 09:40:37 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1052173' 00:06:08.880 killing process with pid 1052173 00:06:08.880 09:40:37 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1052173 00:06:08.880 09:40:37 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1052173 00:06:09.138 09:40:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1052181 ]] 00:06:09.138 09:40:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1052181 00:06:09.138 09:40:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1052181 ']' 00:06:09.138 09:40:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1052181 00:06:09.138 09:40:37 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:09.138 09:40:37 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.138 09:40:37 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1052181 00:06:09.396 09:40:37 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:09.396 09:40:37 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:09.396 09:40:37 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1052181' 00:06:09.396 killing process with pid 1052181 00:06:09.396 09:40:37 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1052181 00:06:09.396 09:40:37 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1052181 00:06:09.654 09:40:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:09.654 09:40:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:09.654 09:40:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1052173 ]] 00:06:09.654 09:40:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1052173 00:06:09.654 09:40:38 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1052173 ']' 00:06:09.654 09:40:38 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1052173 00:06:09.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1052173) - No such process 00:06:09.654 09:40:38 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1052173 is not found' 00:06:09.654 Process with pid 1052173 is not found 00:06:09.654 09:40:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1052181 ]] 00:06:09.654 09:40:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1052181 00:06:09.654 09:40:38 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1052181 ']' 00:06:09.654 09:40:38 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1052181 00:06:09.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1052181) - No such process 00:06:09.654 09:40:38 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1052181 is not found' 00:06:09.654 Process with pid 1052181 is not found 00:06:09.654 09:40:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:09.654 00:06:09.654 real 0m13.906s 00:06:09.654 user 0m24.208s 00:06:09.654 sys 0m5.015s 00:06:09.654 09:40:38 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.654 
09:40:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.654 ************************************ 00:06:09.654 END TEST cpu_locks 00:06:09.654 ************************************ 00:06:09.654 00:06:09.654 real 0m38.588s 00:06:09.654 user 1m13.976s 00:06:09.654 sys 0m8.533s 00:06:09.654 09:40:38 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.654 09:40:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.654 ************************************ 00:06:09.654 END TEST event 00:06:09.654 ************************************ 00:06:09.654 09:40:38 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:09.654 09:40:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.654 09:40:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.654 09:40:38 -- common/autotest_common.sh@10 -- # set +x 00:06:09.654 ************************************ 00:06:09.654 START TEST thread 00:06:09.654 ************************************ 00:06:09.654 09:40:38 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:09.654 * Looking for test storage... 
00:06:09.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:09.654 09:40:38 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:09.654 09:40:38 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:09.654 09:40:38 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:09.912 09:40:38 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:09.912 09:40:38 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.912 09:40:38 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.912 09:40:38 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.912 09:40:38 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.912 09:40:38 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.912 09:40:38 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.912 09:40:38 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.912 09:40:38 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.912 09:40:38 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.912 09:40:38 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.912 09:40:38 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.912 09:40:38 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:09.912 09:40:38 thread -- scripts/common.sh@345 -- # : 1 00:06:09.912 09:40:38 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.912 09:40:38 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.912 09:40:38 thread -- scripts/common.sh@365 -- # decimal 1 00:06:09.912 09:40:38 thread -- scripts/common.sh@353 -- # local d=1 00:06:09.912 09:40:38 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.912 09:40:38 thread -- scripts/common.sh@355 -- # echo 1 00:06:09.912 09:40:38 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.912 09:40:38 thread -- scripts/common.sh@366 -- # decimal 2 00:06:09.912 09:40:38 thread -- scripts/common.sh@353 -- # local d=2 00:06:09.912 09:40:38 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.912 09:40:38 thread -- scripts/common.sh@355 -- # echo 2 00:06:09.912 09:40:38 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.912 09:40:38 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.912 09:40:38 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.912 09:40:38 thread -- scripts/common.sh@368 -- # return 0 00:06:09.912 09:40:38 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.912 09:40:38 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:09.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.912 --rc genhtml_branch_coverage=1 00:06:09.913 --rc genhtml_function_coverage=1 00:06:09.913 --rc genhtml_legend=1 00:06:09.913 --rc geninfo_all_blocks=1 00:06:09.913 --rc geninfo_unexecuted_blocks=1 00:06:09.913 00:06:09.913 ' 00:06:09.913 09:40:38 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:09.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.913 --rc genhtml_branch_coverage=1 00:06:09.913 --rc genhtml_function_coverage=1 00:06:09.913 --rc genhtml_legend=1 00:06:09.913 --rc geninfo_all_blocks=1 00:06:09.913 --rc geninfo_unexecuted_blocks=1 00:06:09.913 00:06:09.913 ' 00:06:09.913 09:40:38 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:09.913 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.913 --rc genhtml_branch_coverage=1 00:06:09.913 --rc genhtml_function_coverage=1 00:06:09.913 --rc genhtml_legend=1 00:06:09.913 --rc geninfo_all_blocks=1 00:06:09.913 --rc geninfo_unexecuted_blocks=1 00:06:09.913 00:06:09.913 ' 00:06:09.913 09:40:38 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:09.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.913 --rc genhtml_branch_coverage=1 00:06:09.913 --rc genhtml_function_coverage=1 00:06:09.913 --rc genhtml_legend=1 00:06:09.913 --rc geninfo_all_blocks=1 00:06:09.913 --rc geninfo_unexecuted_blocks=1 00:06:09.913 00:06:09.913 ' 00:06:09.913 09:40:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.913 09:40:38 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:09.913 09:40:38 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.913 09:40:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.913 ************************************ 00:06:09.913 START TEST thread_poller_perf 00:06:09.913 ************************************ 00:06:09.913 09:40:38 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.913 [2024-12-07 09:40:38.503180] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:09.913 [2024-12-07 09:40:38.503250] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1052746 ] 00:06:09.913 [2024-12-07 09:40:38.560642] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.913 [2024-12-07 09:40:38.600193] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.913 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:11.288 [2024-12-07T08:40:40.014Z] ====================================== 00:06:11.288 [2024-12-07T08:40:40.014Z] busy:2307790834 (cyc) 00:06:11.288 [2024-12-07T08:40:40.014Z] total_run_count: 408000 00:06:11.288 [2024-12-07T08:40:40.014Z] tsc_hz: 2300000000 (cyc) 00:06:11.288 [2024-12-07T08:40:40.014Z] ====================================== 00:06:11.288 [2024-12-07T08:40:40.014Z] poller_cost: 5656 (cyc), 2459 (nsec) 00:06:11.288 00:06:11.288 real 0m1.186s 00:06:11.288 user 0m1.108s 00:06:11.289 sys 0m0.074s 00:06:11.289 09:40:39 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.289 09:40:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.289 ************************************ 00:06:11.289 END TEST thread_poller_perf 00:06:11.289 ************************************ 00:06:11.289 09:40:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:11.289 09:40:39 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:11.289 09:40:39 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.289 09:40:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.289 ************************************ 00:06:11.289 START TEST thread_poller_perf 00:06:11.289 
************************************ 00:06:11.289 09:40:39 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:11.289 [2024-12-07 09:40:39.746558] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:11.289 [2024-12-07 09:40:39.746607] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1052997 ] 00:06:11.289 [2024-12-07 09:40:39.799309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.289 [2024-12-07 09:40:39.837408] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.289 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:12.242 [2024-12-07T08:40:40.968Z] ====================================== 00:06:12.242 [2024-12-07T08:40:40.968Z] busy:2301358662 (cyc) 00:06:12.242 [2024-12-07T08:40:40.968Z] total_run_count: 5047000 00:06:12.242 [2024-12-07T08:40:40.968Z] tsc_hz: 2300000000 (cyc) 00:06:12.242 [2024-12-07T08:40:40.968Z] ====================================== 00:06:12.242 [2024-12-07T08:40:40.968Z] poller_cost: 455 (cyc), 197 (nsec) 00:06:12.242 00:06:12.242 real 0m1.166s 00:06:12.242 user 0m1.093s 00:06:12.242 sys 0m0.069s 00:06:12.242 09:40:40 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.242 09:40:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.242 ************************************ 00:06:12.242 END TEST thread_poller_perf 00:06:12.242 ************************************ 00:06:12.242 09:40:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:12.242 00:06:12.242 real 0m2.629s 00:06:12.242 user 0m2.346s 00:06:12.242 sys 0m0.291s 00:06:12.242 09:40:40 thread -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.242 09:40:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.242 ************************************ 00:06:12.242 END TEST thread 00:06:12.242 ************************************ 00:06:12.242 09:40:40 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:12.242 09:40:40 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:12.242 09:40:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.242 09:40:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.242 09:40:40 -- common/autotest_common.sh@10 -- # set +x 00:06:12.501 ************************************ 00:06:12.501 START TEST app_cmdline 00:06:12.501 ************************************ 00:06:12.501 09:40:40 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:12.501 * Looking for test storage... 00:06:12.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:12.501 09:40:41 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:12.501 09:40:41 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:12.501 09:40:41 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:12.501 09:40:41 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.501 09:40:41 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:12.501 09:40:41 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.502 09:40:41 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:12.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.502 --rc genhtml_branch_coverage=1 
00:06:12.502 --rc genhtml_function_coverage=1 00:06:12.502 --rc genhtml_legend=1 00:06:12.502 --rc geninfo_all_blocks=1 00:06:12.502 --rc geninfo_unexecuted_blocks=1 00:06:12.502 00:06:12.502 ' 00:06:12.502 09:40:41 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:12.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.502 --rc genhtml_branch_coverage=1 00:06:12.502 --rc genhtml_function_coverage=1 00:06:12.502 --rc genhtml_legend=1 00:06:12.502 --rc geninfo_all_blocks=1 00:06:12.502 --rc geninfo_unexecuted_blocks=1 00:06:12.502 00:06:12.502 ' 00:06:12.502 09:40:41 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:12.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.502 --rc genhtml_branch_coverage=1 00:06:12.502 --rc genhtml_function_coverage=1 00:06:12.502 --rc genhtml_legend=1 00:06:12.502 --rc geninfo_all_blocks=1 00:06:12.502 --rc geninfo_unexecuted_blocks=1 00:06:12.502 00:06:12.502 ' 00:06:12.502 09:40:41 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:12.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.502 --rc genhtml_branch_coverage=1 00:06:12.502 --rc genhtml_function_coverage=1 00:06:12.502 --rc genhtml_legend=1 00:06:12.502 --rc geninfo_all_blocks=1 00:06:12.502 --rc geninfo_unexecuted_blocks=1 00:06:12.502 00:06:12.502 ' 00:06:12.502 09:40:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:12.502 09:40:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1053295 00:06:12.502 09:40:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1053295 00:06:12.502 09:40:41 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:12.502 09:40:41 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1053295 ']' 00:06:12.502 09:40:41 app_cmdline -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:12.502 09:40:41 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.502 09:40:41 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.502 09:40:41 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.502 09:40:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.502 [2024-12-07 09:40:41.216670] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:12.502 [2024-12-07 09:40:41.216719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1053295 ] 00:06:12.761 [2024-12-07 09:40:41.270065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.761 [2024-12-07 09:40:41.309409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.020 09:40:41 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.020 09:40:41 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:13.020 09:40:41 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:13.020 { 00:06:13.020 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:13.020 "fields": { 00:06:13.020 "major": 24, 00:06:13.020 "minor": 9, 00:06:13.020 "patch": 1, 00:06:13.020 "suffix": "-pre", 00:06:13.020 "commit": "b18e1bd62" 00:06:13.020 } 00:06:13.020 } 00:06:13.020 09:40:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:13.020 09:40:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:13.020 09:40:41 app_cmdline -- app/cmdline.sh@24 
-- # expected_methods+=("spdk_get_version") 00:06:13.020 09:40:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:13.020 09:40:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:13.020 09:40:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:13.020 09:40:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:13.020 09:40:41 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.020 09:40:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:13.020 09:40:41 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.020 09:40:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:13.020 09:40:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:13.020 09:40:41 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.020 09:40:41 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:13.020 09:40:41 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.020 09:40:41 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:13.020 09:40:41 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.020 09:40:41 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:13.020 09:40:41 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.020 09:40:41 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:13.020 09:40:41 app_cmdline -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:06:13.021 09:40:41 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:13.021 09:40:41 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:13.021 09:40:41 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.280 request: 00:06:13.280 { 00:06:13.280 "method": "env_dpdk_get_mem_stats", 00:06:13.280 "req_id": 1 00:06:13.280 } 00:06:13.280 Got JSON-RPC error response 00:06:13.280 response: 00:06:13.280 { 00:06:13.280 "code": -32601, 00:06:13.280 "message": "Method not found" 00:06:13.280 } 00:06:13.280 09:40:41 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:13.280 09:40:41 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.280 09:40:41 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:13.280 09:40:41 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.280 09:40:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1053295 00:06:13.280 09:40:41 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1053295 ']' 00:06:13.280 09:40:41 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1053295 00:06:13.280 09:40:41 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:13.280 09:40:41 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.280 09:40:41 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1053295 00:06:13.280 09:40:41 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.280 09:40:41 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.280 09:40:41 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1053295' 00:06:13.280 killing process with pid 1053295 00:06:13.280 
09:40:41 app_cmdline -- common/autotest_common.sh@969 -- # kill 1053295 00:06:13.280 09:40:41 app_cmdline -- common/autotest_common.sh@974 -- # wait 1053295 00:06:13.849 00:06:13.849 real 0m1.274s 00:06:13.849 user 0m1.478s 00:06:13.849 sys 0m0.427s 00:06:13.849 09:40:42 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.849 09:40:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:13.849 ************************************ 00:06:13.849 END TEST app_cmdline 00:06:13.849 ************************************ 00:06:13.849 09:40:42 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:13.849 09:40:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.849 09:40:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.849 09:40:42 -- common/autotest_common.sh@10 -- # set +x 00:06:13.849 ************************************ 00:06:13.849 START TEST version 00:06:13.849 ************************************ 00:06:13.849 09:40:42 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:13.849 * Looking for test storage... 
00:06:13.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:13.849 09:40:42 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:13.849 09:40:42 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:13.849 09:40:42 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:13.849 09:40:42 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:13.849 09:40:42 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.849 09:40:42 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.849 09:40:42 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.849 09:40:42 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.849 09:40:42 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.849 09:40:42 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.849 09:40:42 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.850 09:40:42 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.850 09:40:42 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.850 09:40:42 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.850 09:40:42 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.850 09:40:42 version -- scripts/common.sh@344 -- # case "$op" in 00:06:13.850 09:40:42 version -- scripts/common.sh@345 -- # : 1 00:06:13.850 09:40:42 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.850 09:40:42 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.850 09:40:42 version -- scripts/common.sh@365 -- # decimal 1 00:06:13.850 09:40:42 version -- scripts/common.sh@353 -- # local d=1 00:06:13.850 09:40:42 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.850 09:40:42 version -- scripts/common.sh@355 -- # echo 1 00:06:13.850 09:40:42 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.850 09:40:42 version -- scripts/common.sh@366 -- # decimal 2 00:06:13.850 09:40:42 version -- scripts/common.sh@353 -- # local d=2 00:06:13.850 09:40:42 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.850 09:40:42 version -- scripts/common.sh@355 -- # echo 2 00:06:13.850 09:40:42 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.850 09:40:42 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.850 09:40:42 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.850 09:40:42 version -- scripts/common.sh@368 -- # return 0 00:06:13.850 09:40:42 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.850 09:40:42 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:13.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.850 --rc genhtml_branch_coverage=1 00:06:13.850 --rc genhtml_function_coverage=1 00:06:13.850 --rc genhtml_legend=1 00:06:13.850 --rc geninfo_all_blocks=1 00:06:13.850 --rc geninfo_unexecuted_blocks=1 00:06:13.850 00:06:13.850 ' 00:06:13.850 09:40:42 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:13.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.850 --rc genhtml_branch_coverage=1 00:06:13.850 --rc genhtml_function_coverage=1 00:06:13.850 --rc genhtml_legend=1 00:06:13.850 --rc geninfo_all_blocks=1 00:06:13.850 --rc geninfo_unexecuted_blocks=1 00:06:13.850 00:06:13.850 ' 00:06:13.850 09:40:42 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:13.850 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.850 --rc genhtml_branch_coverage=1 00:06:13.850 --rc genhtml_function_coverage=1 00:06:13.850 --rc genhtml_legend=1 00:06:13.850 --rc geninfo_all_blocks=1 00:06:13.850 --rc geninfo_unexecuted_blocks=1 00:06:13.850 00:06:13.850 ' 00:06:13.850 09:40:42 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:13.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.850 --rc genhtml_branch_coverage=1 00:06:13.850 --rc genhtml_function_coverage=1 00:06:13.850 --rc genhtml_legend=1 00:06:13.850 --rc geninfo_all_blocks=1 00:06:13.850 --rc geninfo_unexecuted_blocks=1 00:06:13.850 00:06:13.850 ' 00:06:13.850 09:40:42 version -- app/version.sh@17 -- # get_header_version major 00:06:13.850 09:40:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:13.850 09:40:42 version -- app/version.sh@14 -- # cut -f2 00:06:13.850 09:40:42 version -- app/version.sh@14 -- # tr -d '"' 00:06:13.850 09:40:42 version -- app/version.sh@17 -- # major=24 00:06:13.850 09:40:42 version -- app/version.sh@18 -- # get_header_version minor 00:06:13.850 09:40:42 version -- app/version.sh@14 -- # cut -f2 00:06:13.850 09:40:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:13.850 09:40:42 version -- app/version.sh@14 -- # tr -d '"' 00:06:13.850 09:40:42 version -- app/version.sh@18 -- # minor=9 00:06:13.850 09:40:42 version -- app/version.sh@19 -- # get_header_version patch 00:06:13.850 09:40:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:13.850 09:40:42 version -- app/version.sh@14 -- # cut -f2 00:06:13.850 09:40:42 version -- app/version.sh@14 -- # tr -d '"' 00:06:13.850 
09:40:42 version -- app/version.sh@19 -- # patch=1 00:06:13.850 09:40:42 version -- app/version.sh@20 -- # get_header_version suffix 00:06:13.850 09:40:42 version -- app/version.sh@14 -- # cut -f2 00:06:13.850 09:40:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:13.850 09:40:42 version -- app/version.sh@14 -- # tr -d '"' 00:06:13.850 09:40:42 version -- app/version.sh@20 -- # suffix=-pre 00:06:13.850 09:40:42 version -- app/version.sh@22 -- # version=24.9 00:06:13.850 09:40:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:13.850 09:40:42 version -- app/version.sh@25 -- # version=24.9.1 00:06:13.850 09:40:42 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:13.850 09:40:42 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:13.850 09:40:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:14.110 09:40:42 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:14.110 09:40:42 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:14.110 00:06:14.110 real 0m0.241s 00:06:14.110 user 0m0.140s 00:06:14.110 sys 0m0.132s 00:06:14.110 09:40:42 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.110 09:40:42 version -- common/autotest_common.sh@10 -- # set +x 00:06:14.110 ************************************ 00:06:14.110 END TEST version 00:06:14.110 ************************************ 00:06:14.110 09:40:42 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:14.110 09:40:42 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:14.110 09:40:42 -- 
spdk/autotest.sh@194 -- # uname -s 00:06:14.110 09:40:42 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:14.110 09:40:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:14.110 09:40:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:14.110 09:40:42 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:14.110 09:40:42 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:14.110 09:40:42 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:14.110 09:40:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:14.110 09:40:42 -- common/autotest_common.sh@10 -- # set +x 00:06:14.110 09:40:42 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:14.110 09:40:42 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:14.110 09:40:42 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:14.110 09:40:42 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:14.110 09:40:42 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:14.110 09:40:42 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:14.110 09:40:42 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:14.110 09:40:42 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:14.110 09:40:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.110 09:40:42 -- common/autotest_common.sh@10 -- # set +x 00:06:14.110 ************************************ 00:06:14.110 START TEST nvmf_tcp 00:06:14.110 ************************************ 00:06:14.110 09:40:42 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:14.110 * Looking for test storage... 
00:06:14.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:14.110 09:40:42 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:14.110 09:40:42 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:14.110 09:40:42 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:14.110 09:40:42 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:14.110 09:40:42 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.110 09:40:42 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.110 09:40:42 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.110 09:40:42 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.110 09:40:42 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.110 09:40:42 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.110 09:40:42 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.110 09:40:42 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.110 09:40:42 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.110 09:40:42 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.110 09:40:42 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.110 09:40:42 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:14.110 09:40:42 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:14.110 09:40:42 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.110 09:40:42 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.110 09:40:42 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:14.110 09:40:42 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:14.110 09:40:42 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.370 09:40:42 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:14.370 09:40:42 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.370 09:40:42 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:14.370 09:40:42 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:14.370 09:40:42 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.370 09:40:42 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:14.370 09:40:42 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.370 09:40:42 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.370 09:40:42 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.370 09:40:42 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:14.370 09:40:42 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.370 09:40:42 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:14.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.370 --rc genhtml_branch_coverage=1 00:06:14.370 --rc genhtml_function_coverage=1 00:06:14.370 --rc genhtml_legend=1 00:06:14.370 --rc geninfo_all_blocks=1 00:06:14.370 --rc geninfo_unexecuted_blocks=1 00:06:14.370 00:06:14.370 ' 00:06:14.370 09:40:42 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:14.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.370 --rc genhtml_branch_coverage=1 00:06:14.370 --rc genhtml_function_coverage=1 00:06:14.370 --rc genhtml_legend=1 00:06:14.370 --rc geninfo_all_blocks=1 00:06:14.370 --rc geninfo_unexecuted_blocks=1 00:06:14.370 00:06:14.370 ' 00:06:14.370 09:40:42 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:06:14.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.370 --rc genhtml_branch_coverage=1 00:06:14.370 --rc genhtml_function_coverage=1 00:06:14.370 --rc genhtml_legend=1 00:06:14.370 --rc geninfo_all_blocks=1 00:06:14.370 --rc geninfo_unexecuted_blocks=1 00:06:14.370 00:06:14.370 ' 00:06:14.370 09:40:42 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:14.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.370 --rc genhtml_branch_coverage=1 00:06:14.370 --rc genhtml_function_coverage=1 00:06:14.370 --rc genhtml_legend=1 00:06:14.370 --rc geninfo_all_blocks=1 00:06:14.370 --rc geninfo_unexecuted_blocks=1 00:06:14.370 00:06:14.370 ' 00:06:14.370 09:40:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:14.370 09:40:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:14.370 09:40:42 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:14.370 09:40:42 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:14.370 09:40:42 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.370 09:40:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.370 ************************************ 00:06:14.370 START TEST nvmf_target_core 00:06:14.370 ************************************ 00:06:14.370 09:40:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:14.370 * Looking for test storage... 
00:06:14.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:14.370 09:40:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:14.370 09:40:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:06:14.370 09:40:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:14.370 09:40:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:14.370 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.370 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.370 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.370 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.370 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.370 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:14.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.371 --rc genhtml_branch_coverage=1 00:06:14.371 --rc genhtml_function_coverage=1 00:06:14.371 --rc genhtml_legend=1 00:06:14.371 --rc geninfo_all_blocks=1 00:06:14.371 --rc geninfo_unexecuted_blocks=1 00:06:14.371 00:06:14.371 ' 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:14.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.371 --rc genhtml_branch_coverage=1 
00:06:14.371 --rc genhtml_function_coverage=1 00:06:14.371 --rc genhtml_legend=1 00:06:14.371 --rc geninfo_all_blocks=1 00:06:14.371 --rc geninfo_unexecuted_blocks=1 00:06:14.371 00:06:14.371 ' 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:14.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.371 --rc genhtml_branch_coverage=1 00:06:14.371 --rc genhtml_function_coverage=1 00:06:14.371 --rc genhtml_legend=1 00:06:14.371 --rc geninfo_all_blocks=1 00:06:14.371 --rc geninfo_unexecuted_blocks=1 00:06:14.371 00:06:14.371 ' 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:14.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.371 --rc genhtml_branch_coverage=1 00:06:14.371 --rc genhtml_function_coverage=1 00:06:14.371 --rc genhtml_legend=1 00:06:14.371 --rc geninfo_all_blocks=1 00:06:14.371 --rc geninfo_unexecuted_blocks=1 00:06:14.371 00:06:14.371 ' 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:14.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.371 09:40:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:14.631 ************************************ 00:06:14.631 START TEST nvmf_abort 00:06:14.631 ************************************ 00:06:14.631 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:14.631 * Looking for test storage... 
00:06:14.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:14.631 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:14.631 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:06:14.631 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:14.631 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:14.631 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.631 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.631 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.631 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.631 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.631 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.631 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.631 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.631 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.631 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.632 
09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:14.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.632 --rc genhtml_branch_coverage=1 00:06:14.632 --rc genhtml_function_coverage=1 00:06:14.632 --rc genhtml_legend=1 00:06:14.632 --rc geninfo_all_blocks=1 00:06:14.632 --rc 
geninfo_unexecuted_blocks=1 00:06:14.632 00:06:14.632 ' 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:14.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.632 --rc genhtml_branch_coverage=1 00:06:14.632 --rc genhtml_function_coverage=1 00:06:14.632 --rc genhtml_legend=1 00:06:14.632 --rc geninfo_all_blocks=1 00:06:14.632 --rc geninfo_unexecuted_blocks=1 00:06:14.632 00:06:14.632 ' 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:14.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.632 --rc genhtml_branch_coverage=1 00:06:14.632 --rc genhtml_function_coverage=1 00:06:14.632 --rc genhtml_legend=1 00:06:14.632 --rc geninfo_all_blocks=1 00:06:14.632 --rc geninfo_unexecuted_blocks=1 00:06:14.632 00:06:14.632 ' 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:14.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.632 --rc genhtml_branch_coverage=1 00:06:14.632 --rc genhtml_function_coverage=1 00:06:14.632 --rc genhtml_legend=1 00:06:14.632 --rc geninfo_all_blocks=1 00:06:14.632 --rc geninfo_unexecuted_blocks=1 00:06:14.632 00:06:14.632 ' 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.632 09:40:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:14.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:14.632 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:14.633 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:14.633 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:14.633 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:06:14.633 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:14.633 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:19.915 09:40:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:19.915 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:19.915 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:19.915 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:19.916 09:40:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:19.916 Found net devices under 0000:86:00.0: cvl_0_0 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:19.916 Found net devices under 0000:86:00.1: cvl_0_1 00:06:19.916 
09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr 
flush cvl_0_0 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:19.916 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:20.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:20.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:06:20.177 00:06:20.177 --- 10.0.0.2 ping statistics --- 00:06:20.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.177 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:20.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:20.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:06:20.177 00:06:20.177 --- 10.0.0.1 ping statistics --- 00:06:20.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.177 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=1056756 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 1056756 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1056756 ']' 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.177 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.437 [2024-12-07 09:40:48.926424] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:20.437 [2024-12-07 09:40:48.926472] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:20.437 [2024-12-07 09:40:48.981403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.437 [2024-12-07 09:40:49.026637] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:20.437 [2024-12-07 09:40:49.026675] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:20.437 [2024-12-07 09:40:49.026683] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:20.437 [2024-12-07 09:40:49.026689] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:20.437 [2024-12-07 09:40:49.026694] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:20.437 [2024-12-07 09:40:49.026795] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.437 [2024-12-07 09:40:49.026814] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.437 [2024-12-07 09:40:49.026816] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.437 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.437 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:20.437 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:20.437 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:20.437 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.437 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:20.437 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:20.437 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.437 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.437 [2024-12-07 09:40:49.158730] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.696 Malloc0 00:06:20.696 09:40:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.696 Delay0 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.696 [2024-12-07 09:40:49.234625] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.696 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:20.696 [2024-12-07 09:40:49.383101] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:23.229 Initializing NVMe Controllers 00:06:23.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:23.229 controller IO queue size 128 less than required 00:06:23.229 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:23.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:23.229 Initialization complete. Launching workers. 
00:06:23.229 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36594 00:06:23.229 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36655, failed to submit 62 00:06:23.229 success 36598, unsuccessful 57, failed 0 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:23.230 rmmod nvme_tcp 00:06:23.230 rmmod nvme_fabrics 00:06:23.230 rmmod nvme_keyring 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:23.230 09:40:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 1056756 ']' 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 1056756 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1056756 ']' 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1056756 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1056756 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1056756' 00:06:23.230 killing process with pid 1056756 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1056756 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1056756 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- 
# iptables-restore 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.230 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.140 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:25.140 00:06:25.140 real 0m10.701s 00:06:25.140 user 0m11.315s 00:06:25.140 sys 0m5.112s 00:06:25.140 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.140 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:25.140 ************************************ 00:06:25.140 END TEST nvmf_abort 00:06:25.140 ************************************ 00:06:25.140 09:40:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:25.140 09:40:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:25.140 09:40:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.140 09:40:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:25.400 ************************************ 00:06:25.400 START TEST nvmf_ns_hotplug_stress 00:06:25.400 ************************************ 00:06:25.400 09:40:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:25.400 * Looking for test storage... 00:06:25.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:25.400 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:25.400 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:06:25.400 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:25.400 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:25.400 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.400 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.400 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.400 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.400 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.400 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.400 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.400 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.400 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.400 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.400 
09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.400 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:25.400 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:25.400 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.401 09:40:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:25.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.401 --rc genhtml_branch_coverage=1 00:06:25.401 --rc genhtml_function_coverage=1 00:06:25.401 --rc genhtml_legend=1 00:06:25.401 --rc geninfo_all_blocks=1 00:06:25.401 --rc geninfo_unexecuted_blocks=1 00:06:25.401 00:06:25.401 ' 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:25.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.401 --rc genhtml_branch_coverage=1 00:06:25.401 --rc genhtml_function_coverage=1 00:06:25.401 --rc genhtml_legend=1 00:06:25.401 --rc geninfo_all_blocks=1 00:06:25.401 --rc geninfo_unexecuted_blocks=1 00:06:25.401 00:06:25.401 ' 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:25.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.401 --rc genhtml_branch_coverage=1 00:06:25.401 --rc genhtml_function_coverage=1 00:06:25.401 --rc genhtml_legend=1 00:06:25.401 --rc geninfo_all_blocks=1 00:06:25.401 --rc geninfo_unexecuted_blocks=1 00:06:25.401 00:06:25.401 ' 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:25.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.401 --rc genhtml_branch_coverage=1 00:06:25.401 --rc genhtml_function_coverage=1 00:06:25.401 --rc genhtml_legend=1 00:06:25.401 --rc geninfo_all_blocks=1 00:06:25.401 --rc geninfo_unexecuted_blocks=1 00:06:25.401 
00:06:25.401 ' 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:25.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:25.402 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:30.671 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:30.671 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:30.671 09:40:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:30.671 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:30.671 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:30.671 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:30.671 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:06:30.930 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:30.930 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:06:30.930 09:40:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:30.930 Found net devices under 0000:86:00.0: cvl_0_0 00:06:30.930 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:30.931 09:40:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:30.931 Found net devices under 0000:86:00.1: cvl_0_1 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 
-- # NVMF_SECOND_TARGET_IP= 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment 
--comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:30.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:30.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:06:30.931 00:06:30.931 --- 10.0.0.2 ping statistics --- 00:06:30.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.931 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:30.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:30.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:06:30.931 00:06:30.931 --- 10.0.0.1 ping statistics --- 00:06:30.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.931 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:30.931 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:30.931 09:40:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:31.189 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:31.190 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:31.190 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:31.190 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:31.190 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=1060769 00:06:31.190 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:31.190 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 1060769 00:06:31.190 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1060769 ']' 00:06:31.190 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.190 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.190 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:31.190 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.190 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:31.190 [2024-12-07 09:40:59.745555] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:31.190 [2024-12-07 09:40:59.745608] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.190 [2024-12-07 09:40:59.806236] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.190 [2024-12-07 09:40:59.848173] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:31.190 [2024-12-07 09:40:59.848212] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:31.190 [2024-12-07 09:40:59.848219] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:31.190 [2024-12-07 09:40:59.848225] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:31.190 [2024-12-07 09:40:59.848230] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:31.190 [2024-12-07 09:40:59.848280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.190 [2024-12-07 09:40:59.848370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.190 [2024-12-07 09:40:59.848371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.448 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.448 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:31.448 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:31.448 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:31.448 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:31.449 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:31.449 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:31.449 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:31.449 [2024-12-07 09:41:00.164000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.707 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:31.707 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:31.966 [2024-12-07 09:41:00.590830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.966 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:32.225 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:32.484 Malloc0 00:06:32.484 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:32.484 Delay0 00:06:32.744 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.744 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:33.003 NULL1 00:06:33.003 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:33.262 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1061258 00:06:33.262 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:33.262 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:33.262 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.521 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.521 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:33.521 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:33.780 true 00:06:33.780 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:33.780 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.038 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.297 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:34.297 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:34.297 true 00:06:34.556 09:41:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:34.556 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.494 Read completed with error (sct=0, sc=11) 00:06:35.494 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.754 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:35.754 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:36.013 true 00:06:36.013 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:36.013 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.948 09:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.948 09:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:36.948 09:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:37.206 true 00:06:37.206 09:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:37.206 09:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.465 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.723 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:37.723 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:37.723 true 00:06:37.982 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:37.982 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.920 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:06:38.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.179 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:39.179 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:39.438 true 00:06:39.438 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:39.438 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.377 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.377 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:40.377 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:40.636 true 00:06:40.636 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 
00:06:40.636 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.895 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.895 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:40.895 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:41.154 true 00:06:41.154 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:41.154 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.532 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.532 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:42.532 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:42.791 true 00:06:42.791 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:42.791 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.728 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.728 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:43.728 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:43.987 true 00:06:43.987 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:43.987 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.247 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.247 09:41:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:44.247 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:44.506 true 00:06:44.506 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:44.506 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.885 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.885 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:45.885 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:46.144 true 00:06:46.144 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:46.144 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.079 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.079 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:47.079 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:47.337 true 00:06:47.337 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:47.337 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.594 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.594 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:47.594 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:47.852 true 00:06:47.852 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:47.852 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.786 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.044 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.044 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:49.044 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:49.301 true 00:06:49.301 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:49.302 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.235 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.494 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:50.494 09:41:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:50.494 true 00:06:50.494 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:50.494 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.753 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.011 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:51.011 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:51.268 true 00:06:51.268 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:51.268 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.203 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.461 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.461 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:52.461 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:52.719 true 00:06:52.719 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:52.719 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.656 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.656 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:53.656 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:53.915 true 00:06:53.915 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:53.915 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.174 09:41:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.433 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:54.433 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:54.433 true 00:06:54.433 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:54.433 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.811 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.811 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:55.811 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:56.070 true 
00:06:56.070 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:56.070 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.007 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.007 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:57.007 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:57.266 true 00:06:57.266 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:57.266 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.525 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.525 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:57.525 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1023 00:06:57.784 true 00:06:57.784 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:57.784 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.163 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.163 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:59.163 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:59.422 true 00:06:59.422 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:59.422 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.680 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.680 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:59.680 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:59.939 true 
00:06:59.939 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:06:59.939 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.320 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.321 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:01.321 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:01.580 true 00:07:01.580 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258 00:07:01.580 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.517 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0
00:07:02.517 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:07:02.517 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:07:02.777 true
00:07:02.777 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258
00:07:02.777 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:03.036 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:03.036 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:07:03.036 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:07:03.296 true
00:07:03.296 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258
00:07:03.296 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:04.235 Initializing NVMe Controllers
00:07:04.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:04.235 Controller IO queue size 128, less than required.
00:07:04.235 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:04.235 Controller IO queue size 128, less than required.
00:07:04.235 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:04.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:04.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:04.235 Initialization complete. Launching workers.
00:07:04.235 ========================================================
00:07:04.235                                                                Latency(us)
00:07:04.235 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:04.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2024.07       0.99   43661.50    1931.08 1083978.06
00:07:04.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16938.13       8.27    7537.05    1320.00  380560.02
00:07:04.235 ========================================================
00:07:04.235 Total                                                                    :   18962.20       9.26   11393.05    1320.00 1083978.06
00:07:04.235
00:07:04.235 09:41:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:04.495 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:07:04.495 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:07:04.755 true
00:07:04.755 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1061258
00:07:04.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1061258) - No such process
00:07:04.755 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1061258
00:07:04.755 09:41:33
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.014 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:05.272 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:05.272 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:05.272 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:05.272 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:05.272 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:05.272 null0 00:07:05.272 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:05.272 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:05.272 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:05.531 null1 00:07:05.531 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:05.531 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:05.531 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:05.793 null2 00:07:05.793 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:05.793 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:05.793 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:05.793 null3 00:07:06.052 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:06.052 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:06.052 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:06.052 null4 00:07:06.052 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:06.052 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:06.052 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:06.312 null5 00:07:06.312 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:06.312 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:06.312 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 
00:07:06.574 null6 00:07:06.574 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:06.574 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:06.574 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:06.834 null7 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1066870 1066871 1066873 1066875 1066877 1066879 1066881 1066883 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.834 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.093 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.093 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.093 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.093 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.093 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.093 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.093 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.093 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.093 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.093 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.093 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:07:07.093 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.093 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.093 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.094 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.094 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.094 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.094 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.094 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.094 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.094 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.094 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.094 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.094 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.094 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.094 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.094 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.094 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.094 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.352 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.352 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.352 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.352 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.352 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:07:07.352 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.352 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.352 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.610 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.869 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.869 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.869 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.869 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.869 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.869 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.869 09:41:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.869 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.128 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.129 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:08.129 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.129 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:08.129 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:08.129 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.129 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.129 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.129 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:08.129 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.388 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.648 09:41:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.648 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.648 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:08.648 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:08.648 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.648 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:08.648 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.648 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:08.908 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.908 09:41:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.908 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:08.908 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.908 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.908 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:08.908 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.908 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.908 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.908 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.908 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.909 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:08.909 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.909 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:08.909 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:08.909 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.909 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.909 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:08.909 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.909 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.909 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.909 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.909 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.909 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.909 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.168 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.168 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:09.168 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.168 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:09.168 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.169 09:41:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.169 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:09.428 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.428 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:09.428 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.428 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.428 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:09.428 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:09.428 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:09.428 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.688 
09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.688 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:09.947 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:09.947 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:09.947 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.947 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.947 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.947 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.947 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:09.947 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:09.947 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.947 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.947 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:10.207 09:41:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.207 09:41:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.207 09:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.466 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:10.725 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:10.725 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:10.725 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:10.725 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.725 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:10.725 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:10.725 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.725 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:10.984 rmmod nvme_tcp 00:07:10.984 rmmod nvme_fabrics 00:07:10.984 rmmod nvme_keyring 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 1060769 ']' 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 1060769 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # 
'[' -z 1060769 ']' 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1060769 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1060769 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1060769' 00:07:10.984 killing process with pid 1060769 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1060769 00:07:10.984 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1060769 00:07:11.242 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:11.242 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:11.242 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:11.242 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:11.242 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:07:11.242 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:11.242 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@787 -- # iptables-save 00:07:11.242 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:11.242 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:11.242 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.242 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.242 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.318 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:13.318 00:07:13.318 real 0m48.058s 00:07:13.318 user 3m17.998s 00:07:13.318 sys 0m16.248s 00:07:13.318 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.318 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:13.318 ************************************ 00:07:13.318 END TEST nvmf_ns_hotplug_stress 00:07:13.318 ************************************ 00:07:13.318 09:41:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:13.318 09:41:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:13.318 09:41:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.318 09:41:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:13.318 ************************************ 00:07:13.318 START TEST nvmf_delete_subsystem 00:07:13.318 ************************************ 00:07:13.318 09:41:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:13.585 * Looking for test storage... 00:07:13.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.585 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:13.585 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:13.585 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:07:13.585 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:13.585 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.585 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.585 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.585 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.585 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.585 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.585 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.586 09:41:42 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.586 09:41:42 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:13.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.586 --rc genhtml_branch_coverage=1 00:07:13.586 --rc genhtml_function_coverage=1 00:07:13.586 --rc genhtml_legend=1 00:07:13.586 --rc geninfo_all_blocks=1 00:07:13.586 --rc geninfo_unexecuted_blocks=1 00:07:13.586 00:07:13.586 ' 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:13.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.586 --rc genhtml_branch_coverage=1 00:07:13.586 --rc genhtml_function_coverage=1 00:07:13.586 --rc genhtml_legend=1 00:07:13.586 --rc geninfo_all_blocks=1 00:07:13.586 --rc geninfo_unexecuted_blocks=1 00:07:13.586 00:07:13.586 ' 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:13.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.586 --rc genhtml_branch_coverage=1 00:07:13.586 --rc genhtml_function_coverage=1 00:07:13.586 --rc genhtml_legend=1 00:07:13.586 --rc geninfo_all_blocks=1 00:07:13.586 --rc geninfo_unexecuted_blocks=1 00:07:13.586 00:07:13.586 ' 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:13.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.586 --rc genhtml_branch_coverage=1 00:07:13.586 --rc genhtml_function_coverage=1 00:07:13.586 --rc genhtml_legend=1 00:07:13.586 --rc geninfo_all_blocks=1 00:07:13.586 --rc geninfo_unexecuted_blocks=1 00:07:13.586 00:07:13.586 ' 
00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.586 09:41:42 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:13.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.586 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.587 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:13.587 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:13.587 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:13.587 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:18.862 09:41:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:18.862 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound 
]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:18.862 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ 
tcp == tcp ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:18.862 Found net devices under 0000:86:00.0: cvl_0_0 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:18.862 Found net devices under 0000:86:00.1: cvl_0_1 00:07:18.862 09:41:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:18.862 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:18.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:18.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:07:18.863 00:07:18.863 --- 10.0.0.2 ping statistics --- 00:07:18.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.863 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:18.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:18.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:07:18.863 00:07:18.863 --- 10.0.0.1 ping statistics --- 00:07:18.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.863 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:18.863 09:41:47 
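(Editorial orientation, not part of the captured log: the `nvmftestinit` trace above builds a two-port TCP test topology by moving one NIC port into a network namespace, so initiator and target traffic cross a real link. A minimal dry-run sketch of that sequence, using the `cvl_0_0`/`cvl_0_1` interface names and `10.0.0.x` addressing seen in the log; commands are echoed rather than executed so the sketch runs without root.)

```shell
#!/bin/sh
# Dry-run sketch of the netns topology nvmf/common.sh sets up above.
# Interface names and addresses are taken from the log; RUN echoes each
# command instead of executing it, so no root privileges are needed.
RUN() { echo "+ $*"; }

RUN ip netns add cvl_0_0_ns_spdk                # target-side namespace
RUN ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # move target port into it
RUN ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator IP (host side)
RUN ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
RUN ip link set cvl_0_1 up
RUN ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP
RUN ping -c 1 10.0.0.2                          # connectivity check, as in the log
```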
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=1071269 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 1071269 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1071269 ']' 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.863 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:18.863 [2024-12-07 09:41:47.420151] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:18.863 [2024-12-07 09:41:47.420196] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.863 [2024-12-07 09:41:47.477960] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:18.863 [2024-12-07 09:41:47.518334] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.863 [2024-12-07 09:41:47.518376] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.863 [2024-12-07 09:41:47.518384] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.863 [2024-12-07 09:41:47.518390] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.863 [2024-12-07 09:41:47.518395] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:18.863 [2024-12-07 09:41:47.518435] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.863 [2024-12-07 09:41:47.518438] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.122 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.122 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:19.122 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:19.122 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:19.122 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.122 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.122 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:19.122 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.122 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.122 [2024-12-07 09:41:47.649484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.122 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.122 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.123 [2024-12-07 09:41:47.665670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.123 NULL1 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.123 Delay0 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.123 09:41:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1071289 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:19.123 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:19.123 [2024-12-07 09:41:47.740301] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
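(Editorial orientation, not part of the captured log: before the perf run, `delete_subsystem.sh` configures the target over JSON-RPC: a TCP transport, subsystem `cnode1` with a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so in-flight I/O exists when the subsystem is deleted. A dry-run sketch of that sequence; `rpc.py` is SPDK's standard RPC client, and the commands are echoed rather than sent since no target is running here.)

```shell
#!/bin/sh
# Dry-run sketch of the RPC sequence delete_subsystem.sh issues above.
# RPC echoes each call instead of invoking SPDK's rpc.py client.
RPC() { echo "+ rpc.py $*"; }

RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8 KiB in-capsule data
RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
RPC bdev_null_create NULL1 1000 512                   # 1000 MiB, 512 B blocks
RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # expose Delay0 as a namespace
```

The delay bdev latencies are in microseconds, so every I/O to Delay0 takes on the order of a second; that is what leaves the queue full of outstanding commands for `nvmf_delete_subsystem` to abort, producing the `Read/Write completed with error (sct=0, sc=8)` records below.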
00:07:21.029 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.029 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.029 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 starting I/O failed: -6 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 starting I/O failed: -6 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 starting I/O failed: -6 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 starting I/O failed: -6 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 starting I/O failed: -6 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 starting I/O failed: -6 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error 
(sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 starting I/O failed: -6 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 starting I/O failed: -6 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 starting I/O failed: -6 00:07:21.289 starting I/O failed: -6 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 starting I/O failed: -6 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 starting I/O failed: -6 00:07:21.289 [2024-12-07 09:41:49.910077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8338000c00 is same with the state(6) to be set 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 starting I/O failed: -6 
00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 starting I/O failed: -6 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 starting I/O failed: -6 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 starting I/O failed: -6 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Write completed with error (sct=0, sc=8) 00:07:21.289 Read completed with error (sct=0, sc=8) 00:07:21.289 Write 
completed with error (sct=0, sc=8)
00:07:21.289 starting I/O failed: -6
[repeated Read/Write "completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted]
00:07:22.228 [2024-12-07 09:41:50.876396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1562a80 is same with the state(6) to be set
00:07:22.228 [2024-12-07 09:41:50.912141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155fe80 is same with the state(6) to be set
00:07:22.229 [2024-12-07 09:41:50.912287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f833800d310 is same with the state(6) to be set
00:07:22.229 [2024-12-07 09:41:50.912473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155f320 is same with the state(6) to be set
00:07:22.229 [2024-12-07 09:41:50.913409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155f820 is same with the state(6) to be set
00:07:22.229 Initializing NVMe Controllers
00:07:22.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:22.229 Controller IO queue size 128, less than required.
00:07:22.229 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:22.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:22.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:22.229 Initialization complete. Launching workers.
00:07:22.229 ========================================================
00:07:22.229 Latency(us)
00:07:22.229 Device Information : IOPS MiB/s Average min max
00:07:22.229 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 185.67 0.09 985762.46 590.31 2002010.03
00:07:22.229 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.40 0.08 899383.05 435.67 2001816.37
00:07:22.229 ========================================================
00:07:22.229 Total : 340.07 0.17 946544.95 435.67 2002010.03
00:07:22.229
00:07:22.229 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:22.229 [2024-12-07 09:41:50.914070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1562a80 (9): Bad file descriptor
00:07:22.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:22.229 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:22.229 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1071289
00:07:22.229 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1071289
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1071289) - No such process
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1071289
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:07:22.797 09:41:51
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1071289
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1071289
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:22.797 [2024-12-07 09:41:51.442616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1071981
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071981
00:07:22.797 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:22.798 [2024-12-07 09:41:51.504182] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:07:23.366 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:23.366 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071981
00:07:23.366 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:23.934 09:41:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:23.934 09:41:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071981
00:07:23.934 09:41:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:24.502 09:41:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:24.502 09:41:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071981
00:07:24.502 09:41:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:24.761 09:41:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:24.761 09:41:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071981
00:07:24.761 09:41:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:25.329 09:41:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:25.329 09:41:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071981
00:07:25.329 09:41:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:25.898 09:41:54
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:25.898 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071981
00:07:25.898 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:26.157 Initializing NVMe Controllers
00:07:26.157 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:26.157 Controller IO queue size 128, less than required.
00:07:26.157 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:26.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:26.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:26.157 Initialization complete. Launching workers.
00:07:26.157 ========================================================
00:07:26.157 Latency(us)
00:07:26.157 Device Information : IOPS MiB/s Average min max
00:07:26.157 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003270.99 1000192.29 1011040.43
00:07:26.157 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005846.42 1000155.17 1043058.70
00:07:26.157 ========================================================
00:07:26.157 Total : 256.00 0.12 1004558.70 1000155.17 1043058.70
00:07:26.157
00:07:26.418 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:26.418 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071981
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1071981) - No such process
00:07:26.418 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- #
wait 1071981
00:07:26.418 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:26.418 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:26.418 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup
00:07:26.418 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:26.418 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:26.418 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:26.418 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:26.418 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:26.418 rmmod nvme_tcp
00:07:26.418 rmmod nvme_fabrics
00:07:26.418 rmmod nvme_keyring
00:07:26.418 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:26.418 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:07:26.418 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:07:26.418 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 1071269 ']'
00:07:26.418 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 1071269
00:07:26.418 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1071269 ']'
00:07:26.418 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1071269
00:07:26.418 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:07:26.418 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:26.418 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1071269
00:07:26.418 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:26.418 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:26.418 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1071269'
killing process with pid 1071269
00:07:26.418 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1071269
00:07:26.418 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1071269
00:07:26.676 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:07:26.676 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:07:26.676 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:07:26.676 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:07:26.676 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save
00:07:26.676 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:07:26.676 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore
00:07:26.676 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:26.676 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- #
remove_spdk_ns
00:07:26.676 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:26.676 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:26.676 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:29.212
00:07:29.212 real 0m15.384s
00:07:29.212 user 0m28.870s
00:07:29.212 sys 0m4.882s
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:29.212 ************************************
00:07:29.212 END TEST nvmf_delete_subsystem
00:07:29.212 ************************************
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:29.212 ************************************
00:07:29.212 START TEST nvmf_host_management
00:07:29.212 ************************************
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:29.212 * Looking for test storage...
00:07:29.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-:
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-:
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<'
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:29.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:29.212 --rc genhtml_branch_coverage=1
00:07:29.212 --rc genhtml_function_coverage=1
00:07:29.212 --rc genhtml_legend=1
00:07:29.212 --rc geninfo_all_blocks=1
00:07:29.212 --rc geninfo_unexecuted_blocks=1
00:07:29.212
00:07:29.212 '
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:29.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:29.212 --rc genhtml_branch_coverage=1
00:07:29.212 --rc genhtml_function_coverage=1
00:07:29.212 --rc genhtml_legend=1
00:07:29.212 --rc geninfo_all_blocks=1
00:07:29.212 --rc geninfo_unexecuted_blocks=1
00:07:29.212
00:07:29.212 '
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:07:29.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:29.212 --rc genhtml_branch_coverage=1
00:07:29.212 --rc genhtml_function_coverage=1
00:07:29.212 --rc genhtml_legend=1
00:07:29.212 --rc geninfo_all_blocks=1
00:07:29.212 --rc geninfo_unexecuted_blocks=1
00:07:29.212
00:07:29.212 '
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:07:29.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:29.212 --rc genhtml_branch_coverage=1
00:07:29.212 --rc genhtml_function_coverage=1
00:07:29.212 --rc genhtml_legend=1
00:07:29.212 --rc geninfo_all_blocks=1
00:07:29.212 --rc geninfo_unexecuted_blocks=1
00:07:29.212
00:07:29.212 '
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- #
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.212 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:29.213 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:34.482 09:42:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.482 09:42:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:34.482 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:34.483 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:34.483 09:42:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:34.483 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:34.483 09:42:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:34.483 Found net devices under 0000:86:00.0: cvl_0_0 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.483 09:42:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:34.483 Found net devices under 0000:86:00.1: cvl_0_1 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:34.483 
09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.483 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:07:34.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:07:34.483 00:07:34.483 --- 10.0.0.2 ping statistics --- 00:07:34.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.483 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:34.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:07:34.483 00:07:34.483 --- 10.0.0.1 ping statistics --- 00:07:34.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.483 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # 
nvmf_host_management 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=1076113 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 1076113 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1076113 ']' 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.483 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.742 [2024-12-07 09:42:03.209496] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:34.742 [2024-12-07 09:42:03.209538] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.742 [2024-12-07 09:42:03.266501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.742 [2024-12-07 09:42:03.309689] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.742 [2024-12-07 09:42:03.309729] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.742 [2024-12-07 09:42:03.309736] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.742 [2024-12-07 09:42:03.309742] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.742 [2024-12-07 09:42:03.309748] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:34.742 [2024-12-07 09:42:03.309789] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.742 [2024-12-07 09:42:03.309877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.742 [2024-12-07 09:42:03.309994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.742 [2024-12-07 09:42:03.309994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:34.742 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.742 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:34.742 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:34.742 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:34.742 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.742 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.742 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:34.742 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.742 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.742 [2024-12-07 09:42:03.455138] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.742 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.742 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:34.742 09:42:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:34.742 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.000 Malloc0 00:07:35.000 [2024-12-07 09:42:03.517516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1076315 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1076315 /var/tmp/bdevperf.sock 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1076315 ']' 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:35.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:35.000 { 00:07:35.000 "params": { 00:07:35.000 "name": "Nvme$subsystem", 00:07:35.000 "trtype": "$TEST_TRANSPORT", 00:07:35.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:35.000 "adrfam": "ipv4", 00:07:35.000 "trsvcid": "$NVMF_PORT", 00:07:35.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:35.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:35.000 "hdgst": ${hdgst:-false}, 
00:07:35.000 "ddgst": ${ddgst:-false} 00:07:35.000 }, 00:07:35.000 "method": "bdev_nvme_attach_controller" 00:07:35.000 } 00:07:35.000 EOF 00:07:35.000 )") 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:07:35.000 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:35.000 "params": { 00:07:35.000 "name": "Nvme0", 00:07:35.000 "trtype": "tcp", 00:07:35.000 "traddr": "10.0.0.2", 00:07:35.000 "adrfam": "ipv4", 00:07:35.000 "trsvcid": "4420", 00:07:35.000 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:35.000 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:35.000 "hdgst": false, 00:07:35.000 "ddgst": false 00:07:35.000 }, 00:07:35.000 "method": "bdev_nvme_attach_controller" 00:07:35.000 }' 00:07:35.000 [2024-12-07 09:42:03.615864] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:35.000 [2024-12-07 09:42:03.615911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1076315 ] 00:07:35.000 [2024-12-07 09:42:03.672206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.001 [2024-12-07 09:42:03.712127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.568 Running I/O for 10 seconds... 
00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:35.568 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:35.828 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:35.828 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:35.828 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:35.828 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:35.828 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.828 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.828 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.828 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:07:35.828 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:07:35.828 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:35.828 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:35.828 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:35.828 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:35.828 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.828 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.828 [2024-12-07 09:42:04.384421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d6920 is same with the state(6) to be set
[... the preceding tcp.c:1773 "recv state of tqpair=0x24d6920 is same with the state(6) to be set" message repeated verbatim dozens of times between 09:42:04.384475 and 09:42:04.384887; elided ...]
00:07:35.829 [2024-12-07 09:42:04.384991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.829 [2024-12-07 09:42:04.385025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.829 [2024-12-07 09:42:04.385044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.829 [2024-12-07 09:42:04.385051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.829 [2024-12-07
09:42:04.385061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.829 [2024-12-07 09:42:04.385068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" message pairs repeated for cid 3 through cid 63 (lba 90496 through 98176, stepping by 128) between 09:42:04.385077 and 09:42:04.386022; elided ...]
00:07:35.830 [2024-12-07 09:42:04.386031] nvme_tcp.c:
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98cb20 is same with the state(6) to be set 00:07:35.830 [2024-12-07 09:42:04.386082] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x98cb20 was disconnected and freed. reset controller. 00:07:35.830 [2024-12-07 09:42:04.387032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:35.830 task offset: 90112 on job bdev=Nvme0n1 fails 00:07:35.830 00:07:35.830 Latency(us) 00:07:35.830 [2024-12-07T08:42:04.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.830 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:35.830 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:35.830 Verification LBA range: start 0x0 length 0x400 00:07:35.830 Nvme0n1 : 0.40 1757.43 109.84 159.77 0.00 32485.34 6012.22 27810.06 00:07:35.830 [2024-12-07T08:42:04.556Z] =================================================================================================================== 00:07:35.830 [2024-12-07T08:42:04.556Z] Total : 1757.43 109.84 159.77 0.00 32485.34 6012.22 27810.06 00:07:35.830 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.830 [2024-12-07 09:42:04.389456] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.830 [2024-12-07 09:42:04.389480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x990800 (9): Bad file descriptor 00:07:35.830 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:35.830 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.830 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
00:07:35.830 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.830 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:35.830 [2024-12-07 09:42:04.402547] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:36.764 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1076315 00:07:36.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1076315) - No such process 00:07:36.764 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:36.764 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:36.764 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:36.764 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:36.764 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:07:36.764 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:07:36.764 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:36.764 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:36.764 { 00:07:36.764 "params": { 00:07:36.764 "name": "Nvme$subsystem", 00:07:36.764 "trtype": "$TEST_TRANSPORT", 00:07:36.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:36.764 "adrfam": "ipv4", 
00:07:36.764 "trsvcid": "$NVMF_PORT", 00:07:36.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:36.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:36.764 "hdgst": ${hdgst:-false}, 00:07:36.764 "ddgst": ${ddgst:-false} 00:07:36.764 }, 00:07:36.764 "method": "bdev_nvme_attach_controller" 00:07:36.764 } 00:07:36.764 EOF 00:07:36.764 )") 00:07:36.764 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:07:36.764 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:07:36.764 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:07:36.764 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:36.764 "params": { 00:07:36.764 "name": "Nvme0", 00:07:36.764 "trtype": "tcp", 00:07:36.764 "traddr": "10.0.0.2", 00:07:36.764 "adrfam": "ipv4", 00:07:36.764 "trsvcid": "4420", 00:07:36.764 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:36.764 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:36.764 "hdgst": false, 00:07:36.764 "ddgst": false 00:07:36.764 }, 00:07:36.764 "method": "bdev_nvme_attach_controller" 00:07:36.764 }' 00:07:36.764 [2024-12-07 09:42:05.455501] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:36.764 [2024-12-07 09:42:05.455549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1076622 ] 00:07:37.022 [2024-12-07 09:42:05.510461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.022 [2024-12-07 09:42:05.549025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.022 Running I/O for 1 seconds... 
00:07:38.400 1856.00 IOPS, 116.00 MiB/s 00:07:38.400 Latency(us) 00:07:38.400 [2024-12-07T08:42:07.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.400 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:38.400 Verification LBA range: start 0x0 length 0x400 00:07:38.400 Nvme0n1 : 1.00 1912.50 119.53 0.00 0.00 32943.99 7265.95 27924.03 00:07:38.400 [2024-12-07T08:42:07.126Z] =================================================================================================================== 00:07:38.400 [2024-12-07T08:42:07.126Z] Total : 1912.50 119.53 0.00 0.00 32943.99 7265.95 27924.03 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:38.400 09:42:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:38.400 rmmod nvme_tcp 00:07:38.400 rmmod nvme_fabrics 00:07:38.400 rmmod nvme_keyring 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 1076113 ']' 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 1076113 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1076113 ']' 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1076113 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.400 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1076113 00:07:38.400 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:38.400 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:38.400 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1076113' 00:07:38.400 killing process with pid 1076113 00:07:38.400 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1076113 00:07:38.400 09:42:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1076113 00:07:38.660 [2024-12-07 09:42:07.190246] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:38.660 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:38.660 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:38.660 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:38.660 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:38.660 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:07:38.660 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:07:38.660 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:38.660 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:38.660 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:38.660 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.660 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.660 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.569 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:40.569 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:40.828 00:07:40.828 real 0m11.840s 00:07:40.828 user 0m19.218s 
00:07:40.828 sys 0m5.242s 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.828 ************************************ 00:07:40.828 END TEST nvmf_host_management 00:07:40.828 ************************************ 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.828 ************************************ 00:07:40.828 START TEST nvmf_lvol 00:07:40.828 ************************************ 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:40.828 * Looking for test storage... 
00:07:40.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.828 09:42:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:40.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.828 --rc genhtml_branch_coverage=1 00:07:40.828 --rc genhtml_function_coverage=1 00:07:40.828 --rc genhtml_legend=1 00:07:40.828 --rc geninfo_all_blocks=1 00:07:40.828 --rc geninfo_unexecuted_blocks=1 
00:07:40.828 00:07:40.828 ' 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:40.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.828 --rc genhtml_branch_coverage=1 00:07:40.828 --rc genhtml_function_coverage=1 00:07:40.828 --rc genhtml_legend=1 00:07:40.828 --rc geninfo_all_blocks=1 00:07:40.828 --rc geninfo_unexecuted_blocks=1 00:07:40.828 00:07:40.828 ' 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:40.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.828 --rc genhtml_branch_coverage=1 00:07:40.828 --rc genhtml_function_coverage=1 00:07:40.828 --rc genhtml_legend=1 00:07:40.828 --rc geninfo_all_blocks=1 00:07:40.828 --rc geninfo_unexecuted_blocks=1 00:07:40.828 00:07:40.828 ' 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:40.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.828 --rc genhtml_branch_coverage=1 00:07:40.828 --rc genhtml_function_coverage=1 00:07:40.828 --rc genhtml_legend=1 00:07:40.828 --rc geninfo_all_blocks=1 00:07:40.828 --rc geninfo_unexecuted_blocks=1 00:07:40.828 00:07:40.828 ' 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.828 09:42:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.828 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:41.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:41.088 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:46.364 09:42:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:46.364 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:46.364 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:46.364 09:42:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:46.364 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:46.365 Found net devices under 0000:86:00.0: cvl_0_0 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:46.365 09:42:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:46.365 Found net devices under 0000:86:00.1: cvl_0_1 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:46.365 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:46.365 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:46.365 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:46.365 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:46.623 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:46.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:46.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:07:46.623 00:07:46.623 --- 10.0.0.2 ping statistics --- 00:07:46.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.623 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:07:46.623 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:46.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:46.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:07:46.623 00:07:46.623 --- 10.0.0.1 ping statistics --- 00:07:46.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.623 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=1080787 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 1080787 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1080787 ']' 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.624 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:46.624 [2024-12-07 09:42:15.194026] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:46.624 [2024-12-07 09:42:15.194077] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.624 [2024-12-07 09:42:15.272482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.624 [2024-12-07 09:42:15.313993] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.624 [2024-12-07 09:42:15.314033] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.624 [2024-12-07 09:42:15.314040] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.624 [2024-12-07 09:42:15.314047] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.624 [2024-12-07 09:42:15.314052] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:46.624 [2024-12-07 09:42:15.314099] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.624 [2024-12-07 09:42:15.314198] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.624 [2024-12-07 09:42:15.314199] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.883 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.883 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:46.883 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:46.883 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:46.883 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:46.883 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.883 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:47.141 [2024-12-07 09:42:15.619024] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.141 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:47.141 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:47.141 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:47.400 09:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:47.400 09:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:47.666 09:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:47.927 09:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b5170dcb-45d1-4b5c-ade6-5a5d7f51c0fb 00:07:47.927 09:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b5170dcb-45d1-4b5c-ade6-5a5d7f51c0fb lvol 20 00:07:48.185 09:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=48302e29-beef-4d5a-bd59-7f8e2a826702 00:07:48.185 09:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:48.185 09:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 48302e29-beef-4d5a-bd59-7f8e2a826702 00:07:48.444 09:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:48.702 [2024-12-07 09:42:17.239381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.702 09:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.960 09:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1081274 00:07:48.960 09:42:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:48.961 09:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:49.947 09:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 48302e29-beef-4d5a-bd59-7f8e2a826702 MY_SNAPSHOT 00:07:50.205 09:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5e87f37f-d4ee-4e3f-8c01-22fa2bd10f27 00:07:50.205 09:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 48302e29-beef-4d5a-bd59-7f8e2a826702 30 00:07:50.464 09:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5e87f37f-d4ee-4e3f-8c01-22fa2bd10f27 MY_CLONE 00:07:50.738 09:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=48ee4a4a-c7ae-4baf-96a2-05236138b43b 00:07:50.738 09:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 48ee4a4a-c7ae-4baf-96a2-05236138b43b 00:07:51.303 09:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1081274 00:07:59.424 Initializing NVMe Controllers 00:07:59.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:59.424 Controller IO queue size 128, less than required. 00:07:59.424 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:59.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:59.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:59.424 Initialization complete. Launching workers. 00:07:59.424 ======================================================== 00:07:59.424 Latency(us) 00:07:59.424 Device Information : IOPS MiB/s Average min max 00:07:59.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11991.29 46.84 10673.38 1596.98 103949.60 00:07:59.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11902.69 46.49 10753.32 3472.37 42119.35 00:07:59.424 ======================================================== 00:07:59.424 Total : 23893.98 93.34 10713.20 1596.98 103949.60 00:07:59.424 00:07:59.424 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:59.424 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 48302e29-beef-4d5a-bd59-7f8e2a826702 00:07:59.683 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b5170dcb-45d1-4b5c-ade6-5a5d7f51c0fb 00:07:59.683 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:59.683 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:59.683 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:59.683 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:59.683 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:59.683 09:42:28 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.683 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:59.683 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.683 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.942 rmmod nvme_tcp 00:07:59.942 rmmod nvme_fabrics 00:07:59.942 rmmod nvme_keyring 00:07:59.942 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.942 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:59.942 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:59.942 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 1080787 ']' 00:07:59.942 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 1080787 00:07:59.942 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1080787 ']' 00:07:59.942 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1080787 00:07:59.942 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:59.942 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.942 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1080787 00:07:59.942 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:59.942 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:59.942 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1080787' 00:07:59.942 killing process with pid 1080787 00:07:59.942 
09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1080787 00:07:59.942 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1080787 00:08:00.202 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:00.202 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:00.202 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:00.202 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:00.202 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:08:00.202 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:00.202 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:08:00.202 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:00.202 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:00.202 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.202 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.202 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.739 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:02.739 00:08:02.739 real 0m21.475s 00:08:02.739 user 1m2.572s 00:08:02.739 sys 0m7.532s 00:08:02.739 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.739 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:02.739 ************************************ 00:08:02.739 
END TEST nvmf_lvol 00:08:02.739 ************************************ 00:08:02.739 09:42:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:02.739 09:42:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:02.739 09:42:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.739 09:42:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:02.739 ************************************ 00:08:02.739 START TEST nvmf_lvs_grow 00:08:02.739 ************************************ 00:08:02.739 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:02.739 * Looking for test storage... 00:08:02.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.739 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:02.739 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:08:02.739 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.739 09:42:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.739 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:02.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.740 --rc genhtml_branch_coverage=1 00:08:02.740 --rc genhtml_function_coverage=1 00:08:02.740 --rc genhtml_legend=1 00:08:02.740 --rc geninfo_all_blocks=1 00:08:02.740 --rc geninfo_unexecuted_blocks=1 00:08:02.740 00:08:02.740 ' 
00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:02.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.740 --rc genhtml_branch_coverage=1 00:08:02.740 --rc genhtml_function_coverage=1 00:08:02.740 --rc genhtml_legend=1 00:08:02.740 --rc geninfo_all_blocks=1 00:08:02.740 --rc geninfo_unexecuted_blocks=1 00:08:02.740 00:08:02.740 ' 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:02.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.740 --rc genhtml_branch_coverage=1 00:08:02.740 --rc genhtml_function_coverage=1 00:08:02.740 --rc genhtml_legend=1 00:08:02.740 --rc geninfo_all_blocks=1 00:08:02.740 --rc geninfo_unexecuted_blocks=1 00:08:02.740 00:08:02.740 ' 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:02.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.740 --rc genhtml_branch_coverage=1 00:08:02.740 --rc genhtml_function_coverage=1 00:08:02.740 --rc genhtml_legend=1 00:08:02.740 --rc geninfo_all_blocks=1 00:08:02.740 --rc geninfo_unexecuted_blocks=1 00:08:02.740 00:08:02.740 ' 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.740 09:42:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.740 
09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.740 09:42:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:02.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.740 
09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:02.740 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # 
pci_devs=("${e810[@]}") 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:08.015 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:08.015 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:08.015 09:42:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:08.015 Found net devices under 0000:86:00.0: cvl_0_0 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:08.015 Found net devices under 0000:86:00.1: cvl_0_1 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:08.015 09:42:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.015 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 
10.0.0.2 00:08:08.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:08:08.015 00:08:08.015 --- 10.0.0.2 ping statistics --- 00:08:08.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.015 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:08:08.015 00:08:08.015 --- 10.0.0.1 ping statistics --- 00:08:08.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.015 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=1086437 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 1086437 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1086437 ']' 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.015 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.015 [2024-12-07 09:42:36.229251] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:08.015 [2024-12-07 09:42:36.229297] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.015 [2024-12-07 09:42:36.287943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.015 [2024-12-07 09:42:36.328489] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.015 [2024-12-07 09:42:36.328529] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.015 [2024-12-07 09:42:36.328536] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.015 [2024-12-07 09:42:36.328542] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.016 [2024-12-07 09:42:36.328548] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:08.016 [2024-12-07 09:42:36.328569] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:08.016 [2024-12-07 09:42:36.623485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.016 ************************************ 00:08:08.016 START TEST lvs_grow_clean 00:08:08.016 ************************************ 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:08.016 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:08.275 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:08.275 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:08.533 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3a1cb810-f4b1-45be-8c3e-4e35c4baf392 00:08:08.533 09:42:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a1cb810-f4b1-45be-8c3e-4e35c4baf392 00:08:08.533 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:08.790 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:08.790 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:08.790 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3a1cb810-f4b1-45be-8c3e-4e35c4baf392 lvol 150 00:08:08.790 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=72b352b7-5aea-4b00-8835-28fd746aa9bc 00:08:08.790 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:08.790 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:09.047 [2024-12-07 09:42:37.664363] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:09.047 [2024-12-07 09:42:37.664414] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:09.047 true 00:08:09.047 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a1cb810-f4b1-45be-8c3e-4e35c4baf392 00:08:09.047 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:09.304 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:09.304 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:09.564 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 72b352b7-5aea-4b00-8835-28fd746aa9bc 00:08:09.565 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:09.821 [2024-12-07 09:42:38.422631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.821 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:10.080 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1086941 00:08:10.080 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:10.080 09:42:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:10.080 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1086941 /var/tmp/bdevperf.sock 00:08:10.080 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1086941 ']' 00:08:10.080 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:10.080 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.080 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:10.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:10.080 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.080 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:10.080 [2024-12-07 09:42:38.670799] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:10.080 [2024-12-07 09:42:38.670847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086941 ] 00:08:10.080 [2024-12-07 09:42:38.724908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.080 [2024-12-07 09:42:38.764130] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.337 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.337 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:10.337 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:10.595 Nvme0n1 00:08:10.595 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:10.595 [ 00:08:10.595 { 00:08:10.595 "name": "Nvme0n1", 00:08:10.595 "aliases": [ 00:08:10.595 "72b352b7-5aea-4b00-8835-28fd746aa9bc" 00:08:10.595 ], 00:08:10.595 "product_name": "NVMe disk", 00:08:10.595 "block_size": 4096, 00:08:10.595 "num_blocks": 38912, 00:08:10.595 "uuid": "72b352b7-5aea-4b00-8835-28fd746aa9bc", 00:08:10.595 "numa_id": 1, 00:08:10.595 "assigned_rate_limits": { 00:08:10.595 "rw_ios_per_sec": 0, 00:08:10.595 "rw_mbytes_per_sec": 0, 00:08:10.595 "r_mbytes_per_sec": 0, 00:08:10.595 "w_mbytes_per_sec": 0 00:08:10.595 }, 00:08:10.595 "claimed": false, 00:08:10.595 "zoned": false, 00:08:10.595 "supported_io_types": { 00:08:10.595 "read": true, 
00:08:10.595 "write": true, 00:08:10.595 "unmap": true, 00:08:10.595 "flush": true, 00:08:10.595 "reset": true, 00:08:10.595 "nvme_admin": true, 00:08:10.595 "nvme_io": true, 00:08:10.595 "nvme_io_md": false, 00:08:10.595 "write_zeroes": true, 00:08:10.595 "zcopy": false, 00:08:10.595 "get_zone_info": false, 00:08:10.595 "zone_management": false, 00:08:10.595 "zone_append": false, 00:08:10.595 "compare": true, 00:08:10.595 "compare_and_write": true, 00:08:10.595 "abort": true, 00:08:10.595 "seek_hole": false, 00:08:10.595 "seek_data": false, 00:08:10.595 "copy": true, 00:08:10.595 "nvme_iov_md": false 00:08:10.595 }, 00:08:10.595 "memory_domains": [ 00:08:10.595 { 00:08:10.595 "dma_device_id": "system", 00:08:10.595 "dma_device_type": 1 00:08:10.595 } 00:08:10.595 ], 00:08:10.595 "driver_specific": { 00:08:10.595 "nvme": [ 00:08:10.595 { 00:08:10.595 "trid": { 00:08:10.595 "trtype": "TCP", 00:08:10.595 "adrfam": "IPv4", 00:08:10.595 "traddr": "10.0.0.2", 00:08:10.595 "trsvcid": "4420", 00:08:10.595 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:10.595 }, 00:08:10.595 "ctrlr_data": { 00:08:10.595 "cntlid": 1, 00:08:10.595 "vendor_id": "0x8086", 00:08:10.595 "model_number": "SPDK bdev Controller", 00:08:10.595 "serial_number": "SPDK0", 00:08:10.595 "firmware_revision": "24.09.1", 00:08:10.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:10.595 "oacs": { 00:08:10.595 "security": 0, 00:08:10.595 "format": 0, 00:08:10.595 "firmware": 0, 00:08:10.595 "ns_manage": 0 00:08:10.595 }, 00:08:10.595 "multi_ctrlr": true, 00:08:10.595 "ana_reporting": false 00:08:10.595 }, 00:08:10.595 "vs": { 00:08:10.595 "nvme_version": "1.3" 00:08:10.595 }, 00:08:10.595 "ns_data": { 00:08:10.595 "id": 1, 00:08:10.595 "can_share": true 00:08:10.595 } 00:08:10.595 } 00:08:10.595 ], 00:08:10.595 "mp_policy": "active_passive" 00:08:10.595 } 00:08:10.595 } 00:08:10.595 ] 00:08:10.595 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1086955 00:08:10.595 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:10.595 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:10.853 Running I/O for 10 seconds... 00:08:11.791 Latency(us) 00:08:11.791 [2024-12-07T08:42:40.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.791 Nvme0n1 : 1.00 22053.00 86.14 0.00 0.00 0.00 0.00 0.00 00:08:11.791 [2024-12-07T08:42:40.518Z] =================================================================================================================== 00:08:11.792 [2024-12-07T08:42:40.518Z] Total : 22053.00 86.14 0.00 0.00 0.00 0.00 0.00 00:08:11.792 00:08:12.730 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3a1cb810-f4b1-45be-8c3e-4e35c4baf392 00:08:12.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.730 Nvme0n1 : 2.00 22402.00 87.51 0.00 0.00 0.00 0.00 0.00 00:08:12.730 [2024-12-07T08:42:41.456Z] =================================================================================================================== 00:08:12.730 [2024-12-07T08:42:41.456Z] Total : 22402.00 87.51 0.00 0.00 0.00 0.00 0.00 00:08:12.730 00:08:12.989 true 00:08:12.989 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a1cb810-f4b1-45be-8c3e-4e35c4baf392 00:08:12.989 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:13.249 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:13.249 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:13.249 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1086955 00:08:13.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.818 Nvme0n1 : 3.00 22578.33 88.20 0.00 0.00 0.00 0.00 0.00 00:08:13.818 [2024-12-07T08:42:42.544Z] =================================================================================================================== 00:08:13.818 [2024-12-07T08:42:42.544Z] Total : 22578.33 88.20 0.00 0.00 0.00 0.00 0.00 00:08:13.818 00:08:14.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.756 Nvme0n1 : 4.00 22656.50 88.50 0.00 0.00 0.00 0.00 0.00 00:08:14.756 [2024-12-07T08:42:43.482Z] =================================================================================================================== 00:08:14.756 [2024-12-07T08:42:43.482Z] Total : 22656.50 88.50 0.00 0.00 0.00 0.00 0.00 00:08:14.756 00:08:15.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.695 Nvme0n1 : 5.00 22724.60 88.77 0.00 0.00 0.00 0.00 0.00 00:08:15.695 [2024-12-07T08:42:44.421Z] =================================================================================================================== 00:08:15.695 [2024-12-07T08:42:44.421Z] Total : 22724.60 88.77 0.00 0.00 0.00 0.00 0.00 00:08:15.695 00:08:17.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.073 Nvme0n1 : 6.00 22768.67 88.94 0.00 0.00 0.00 0.00 0.00 00:08:17.073 [2024-12-07T08:42:45.799Z] =================================================================================================================== 00:08:17.073 
[2024-12-07T08:42:45.799Z] Total : 22768.67 88.94 0.00 0.00 0.00 0.00 0.00 00:08:17.073 00:08:18.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.010 Nvme0n1 : 7.00 22786.71 89.01 0.00 0.00 0.00 0.00 0.00 00:08:18.010 [2024-12-07T08:42:46.736Z] =================================================================================================================== 00:08:18.010 [2024-12-07T08:42:46.736Z] Total : 22786.71 89.01 0.00 0.00 0.00 0.00 0.00 00:08:18.010 00:08:18.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.947 Nvme0n1 : 8.00 22823.38 89.15 0.00 0.00 0.00 0.00 0.00 00:08:18.947 [2024-12-07T08:42:47.673Z] =================================================================================================================== 00:08:18.947 [2024-12-07T08:42:47.673Z] Total : 22823.38 89.15 0.00 0.00 0.00 0.00 0.00 00:08:18.947 00:08:19.902 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.902 Nvme0n1 : 9.00 22855.67 89.28 0.00 0.00 0.00 0.00 0.00 00:08:19.902 [2024-12-07T08:42:48.628Z] =================================================================================================================== 00:08:19.902 [2024-12-07T08:42:48.628Z] Total : 22855.67 89.28 0.00 0.00 0.00 0.00 0.00 00:08:19.902 00:08:20.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.841 Nvme0n1 : 10.00 22866.00 89.32 0.00 0.00 0.00 0.00 0.00 00:08:20.841 [2024-12-07T08:42:49.567Z] =================================================================================================================== 00:08:20.841 [2024-12-07T08:42:49.567Z] Total : 22866.00 89.32 0.00 0.00 0.00 0.00 0.00 00:08:20.841 00:08:20.841 00:08:20.841 Latency(us) 00:08:20.841 [2024-12-07T08:42:49.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:20.841 Nvme0n1 : 10.01 22864.82 89.32 0.00 0.00 5595.06 3319.54 13791.05 00:08:20.841 [2024-12-07T08:42:49.567Z] =================================================================================================================== 00:08:20.841 [2024-12-07T08:42:49.567Z] Total : 22864.82 89.32 0.00 0.00 5595.06 3319.54 13791.05 00:08:20.841 { 00:08:20.841 "results": [ 00:08:20.841 { 00:08:20.841 "job": "Nvme0n1", 00:08:20.841 "core_mask": "0x2", 00:08:20.841 "workload": "randwrite", 00:08:20.841 "status": "finished", 00:08:20.841 "queue_depth": 128, 00:08:20.841 "io_size": 4096, 00:08:20.841 "runtime": 10.006113, 00:08:20.841 "iops": 22864.82273386279, 00:08:20.841 "mibps": 89.31571380415153, 00:08:20.841 "io_failed": 0, 00:08:20.841 "io_timeout": 0, 00:08:20.841 "avg_latency_us": 5595.0609442422865, 00:08:20.841 "min_latency_us": 3319.5408695652172, 00:08:20.841 "max_latency_us": 13791.053913043479 00:08:20.841 } 00:08:20.841 ], 00:08:20.841 "core_count": 1 00:08:20.841 } 00:08:20.841 09:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1086941 00:08:20.841 09:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1086941 ']' 00:08:20.841 09:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1086941 00:08:20.841 09:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:20.841 09:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.841 09:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1086941 00:08:20.841 09:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:20.841 09:42:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:20.841 09:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1086941' 00:08:20.841 killing process with pid 1086941 00:08:20.841 09:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1086941 00:08:20.841 Received shutdown signal, test time was about 10.000000 seconds 00:08:20.841 00:08:20.841 Latency(us) 00:08:20.841 [2024-12-07T08:42:49.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.841 [2024-12-07T08:42:49.567Z] =================================================================================================================== 00:08:20.841 [2024-12-07T08:42:49.567Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:20.841 09:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1086941 00:08:21.100 09:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:21.359 09:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:21.618 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a1cb810-f4b1-45be-8c3e-4e35c4baf392 00:08:21.618 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:21.618 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:21.618 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:21.618 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:21.878 [2024-12-07 09:42:50.455302] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:21.878 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a1cb810-f4b1-45be-8c3e-4e35c4baf392 00:08:21.878 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:21.878 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a1cb810-f4b1-45be-8c3e-4e35c4baf392 00:08:21.878 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:21.878 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.878 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:21.878 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.878 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:21.878 
09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.878 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:21.878 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:21.878 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a1cb810-f4b1-45be-8c3e-4e35c4baf392 00:08:22.137 request: 00:08:22.137 { 00:08:22.137 "uuid": "3a1cb810-f4b1-45be-8c3e-4e35c4baf392", 00:08:22.137 "method": "bdev_lvol_get_lvstores", 00:08:22.137 "req_id": 1 00:08:22.137 } 00:08:22.137 Got JSON-RPC error response 00:08:22.137 response: 00:08:22.137 { 00:08:22.137 "code": -19, 00:08:22.137 "message": "No such device" 00:08:22.137 } 00:08:22.137 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:22.137 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:22.137 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:22.137 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:22.137 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:22.397 aio_bdev 00:08:22.397 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 72b352b7-5aea-4b00-8835-28fd746aa9bc 00:08:22.397 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=72b352b7-5aea-4b00-8835-28fd746aa9bc 00:08:22.397 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:22.397 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:22.397 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:22.397 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:22.397 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:22.397 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 72b352b7-5aea-4b00-8835-28fd746aa9bc -t 2000 00:08:22.656 [ 00:08:22.656 { 00:08:22.656 "name": "72b352b7-5aea-4b00-8835-28fd746aa9bc", 00:08:22.656 "aliases": [ 00:08:22.656 "lvs/lvol" 00:08:22.656 ], 00:08:22.656 "product_name": "Logical Volume", 00:08:22.656 "block_size": 4096, 00:08:22.656 "num_blocks": 38912, 00:08:22.656 "uuid": "72b352b7-5aea-4b00-8835-28fd746aa9bc", 00:08:22.656 "assigned_rate_limits": { 00:08:22.656 "rw_ios_per_sec": 0, 00:08:22.656 "rw_mbytes_per_sec": 0, 00:08:22.656 "r_mbytes_per_sec": 0, 00:08:22.656 "w_mbytes_per_sec": 0 00:08:22.656 }, 00:08:22.656 "claimed": false, 00:08:22.656 "zoned": false, 00:08:22.656 "supported_io_types": { 00:08:22.656 "read": true, 00:08:22.656 "write": true, 00:08:22.656 "unmap": true, 00:08:22.656 "flush": false, 00:08:22.656 "reset": true, 00:08:22.656 
"nvme_admin": false, 00:08:22.656 "nvme_io": false, 00:08:22.656 "nvme_io_md": false, 00:08:22.656 "write_zeroes": true, 00:08:22.656 "zcopy": false, 00:08:22.656 "get_zone_info": false, 00:08:22.656 "zone_management": false, 00:08:22.656 "zone_append": false, 00:08:22.656 "compare": false, 00:08:22.656 "compare_and_write": false, 00:08:22.656 "abort": false, 00:08:22.656 "seek_hole": true, 00:08:22.656 "seek_data": true, 00:08:22.656 "copy": false, 00:08:22.656 "nvme_iov_md": false 00:08:22.656 }, 00:08:22.656 "driver_specific": { 00:08:22.656 "lvol": { 00:08:22.656 "lvol_store_uuid": "3a1cb810-f4b1-45be-8c3e-4e35c4baf392", 00:08:22.656 "base_bdev": "aio_bdev", 00:08:22.656 "thin_provision": false, 00:08:22.656 "num_allocated_clusters": 38, 00:08:22.656 "snapshot": false, 00:08:22.656 "clone": false, 00:08:22.656 "esnap_clone": false 00:08:22.656 } 00:08:22.656 } 00:08:22.656 } 00:08:22.656 ] 00:08:22.656 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:22.656 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a1cb810-f4b1-45be-8c3e-4e35c4baf392 00:08:22.656 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:22.916 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:22.916 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a1cb810-f4b1-45be-8c3e-4e35c4baf392 00:08:22.916 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:22.916 09:42:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:22.916 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 72b352b7-5aea-4b00-8835-28fd746aa9bc 00:08:23.176 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3a1cb810-f4b1-45be-8c3e-4e35c4baf392 00:08:23.435 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:23.695 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:23.695 00:08:23.695 real 0m15.589s 00:08:23.695 user 0m15.152s 00:08:23.695 sys 0m1.473s 00:08:23.695 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.695 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:23.695 ************************************ 00:08:23.695 END TEST lvs_grow_clean 00:08:23.695 ************************************ 00:08:23.695 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:23.695 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:23.695 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.695 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.695 ************************************ 
00:08:23.695 START TEST lvs_grow_dirty 00:08:23.695 ************************************ 00:08:23.695 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:23.695 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:23.695 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:23.695 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:23.695 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:23.695 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:23.695 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:23.695 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:23.695 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:23.695 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:23.955 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:23.955 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:24.215 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=dbd159d3-0478-4ed5-86f1-9bf43aff7238 00:08:24.215 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbd159d3-0478-4ed5-86f1-9bf43aff7238 00:08:24.215 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:24.215 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:24.215 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:24.215 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dbd159d3-0478-4ed5-86f1-9bf43aff7238 lvol 150 00:08:24.474 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4ded6e26-b1d5-4bb7-9c40-cbc7fff46ae9 00:08:24.474 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.474 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:24.733 [2024-12-07 09:42:53.293797] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:08:24.733 [2024-12-07 09:42:53.293843] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:24.733 true 00:08:24.733 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbd159d3-0478-4ed5-86f1-9bf43aff7238 00:08:24.733 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:24.992 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:24.992 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:24.992 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4ded6e26-b1d5-4bb7-9c40-cbc7fff46ae9 00:08:25.251 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:25.509 [2024-12-07 09:42:54.060088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.509 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:25.769 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1089542 00:08:25.769 09:42:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:25.769 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:25.769 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1089542 /var/tmp/bdevperf.sock 00:08:25.769 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1089542 ']' 00:08:25.769 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:25.769 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.769 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:25.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:25.769 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.769 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:25.769 [2024-12-07 09:42:54.317370] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:25.769 [2024-12-07 09:42:54.317419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089542 ] 00:08:25.769 [2024-12-07 09:42:54.370842] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.769 [2024-12-07 09:42:54.412619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.028 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.028 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:26.028 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:26.286 Nvme0n1 00:08:26.286 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:26.286 [ 00:08:26.286 { 00:08:26.286 "name": "Nvme0n1", 00:08:26.286 "aliases": [ 00:08:26.286 "4ded6e26-b1d5-4bb7-9c40-cbc7fff46ae9" 00:08:26.286 ], 00:08:26.287 "product_name": "NVMe disk", 00:08:26.287 "block_size": 4096, 00:08:26.287 "num_blocks": 38912, 00:08:26.287 "uuid": "4ded6e26-b1d5-4bb7-9c40-cbc7fff46ae9", 00:08:26.287 "numa_id": 1, 00:08:26.287 "assigned_rate_limits": { 00:08:26.287 "rw_ios_per_sec": 0, 00:08:26.287 "rw_mbytes_per_sec": 0, 00:08:26.287 "r_mbytes_per_sec": 0, 00:08:26.287 "w_mbytes_per_sec": 0 00:08:26.287 }, 00:08:26.287 "claimed": false, 00:08:26.287 "zoned": false, 00:08:26.287 "supported_io_types": { 00:08:26.287 "read": true, 
00:08:26.287 "write": true, 00:08:26.287 "unmap": true, 00:08:26.287 "flush": true, 00:08:26.287 "reset": true, 00:08:26.287 "nvme_admin": true, 00:08:26.287 "nvme_io": true, 00:08:26.287 "nvme_io_md": false, 00:08:26.287 "write_zeroes": true, 00:08:26.287 "zcopy": false, 00:08:26.287 "get_zone_info": false, 00:08:26.287 "zone_management": false, 00:08:26.287 "zone_append": false, 00:08:26.287 "compare": true, 00:08:26.287 "compare_and_write": true, 00:08:26.287 "abort": true, 00:08:26.287 "seek_hole": false, 00:08:26.287 "seek_data": false, 00:08:26.287 "copy": true, 00:08:26.287 "nvme_iov_md": false 00:08:26.287 }, 00:08:26.287 "memory_domains": [ 00:08:26.287 { 00:08:26.287 "dma_device_id": "system", 00:08:26.287 "dma_device_type": 1 00:08:26.287 } 00:08:26.287 ], 00:08:26.287 "driver_specific": { 00:08:26.287 "nvme": [ 00:08:26.287 { 00:08:26.287 "trid": { 00:08:26.287 "trtype": "TCP", 00:08:26.287 "adrfam": "IPv4", 00:08:26.287 "traddr": "10.0.0.2", 00:08:26.287 "trsvcid": "4420", 00:08:26.287 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:26.287 }, 00:08:26.287 "ctrlr_data": { 00:08:26.287 "cntlid": 1, 00:08:26.287 "vendor_id": "0x8086", 00:08:26.287 "model_number": "SPDK bdev Controller", 00:08:26.287 "serial_number": "SPDK0", 00:08:26.287 "firmware_revision": "24.09.1", 00:08:26.287 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:26.287 "oacs": { 00:08:26.287 "security": 0, 00:08:26.287 "format": 0, 00:08:26.287 "firmware": 0, 00:08:26.287 "ns_manage": 0 00:08:26.287 }, 00:08:26.287 "multi_ctrlr": true, 00:08:26.287 "ana_reporting": false 00:08:26.287 }, 00:08:26.287 "vs": { 00:08:26.287 "nvme_version": "1.3" 00:08:26.287 }, 00:08:26.287 "ns_data": { 00:08:26.287 "id": 1, 00:08:26.287 "can_share": true 00:08:26.287 } 00:08:26.287 } 00:08:26.287 ], 00:08:26.287 "mp_policy": "active_passive" 00:08:26.287 } 00:08:26.287 } 00:08:26.287 ] 00:08:26.287 09:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1089771 00:08:26.287 09:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:26.287 09:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:26.545 Running I/O for 10 seconds... 00:08:27.479 Latency(us) 00:08:27.479 [2024-12-07T08:42:56.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.479 Nvme0n1 : 1.00 22485.00 87.83 0.00 0.00 0.00 0.00 0.00 00:08:27.479 [2024-12-07T08:42:56.205Z] =================================================================================================================== 00:08:27.479 [2024-12-07T08:42:56.205Z] Total : 22485.00 87.83 0.00 0.00 0.00 0.00 0.00 00:08:27.479 00:08:28.414 09:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dbd159d3-0478-4ed5-86f1-9bf43aff7238 00:08:28.415 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.415 Nvme0n1 : 2.00 22685.50 88.62 0.00 0.00 0.00 0.00 0.00 00:08:28.415 [2024-12-07T08:42:57.141Z] =================================================================================================================== 00:08:28.415 [2024-12-07T08:42:57.141Z] Total : 22685.50 88.62 0.00 0.00 0.00 0.00 0.00 00:08:28.415 00:08:28.673 true 00:08:28.673 09:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbd159d3-0478-4ed5-86f1-9bf43aff7238 00:08:28.673 09:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:28.948 09:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:28.948 09:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:28.948 09:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1089771 00:08:29.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.515 Nvme0n1 : 3.00 22739.00 88.82 0.00 0.00 0.00 0.00 0.00 00:08:29.515 [2024-12-07T08:42:58.241Z] =================================================================================================================== 00:08:29.515 [2024-12-07T08:42:58.241Z] Total : 22739.00 88.82 0.00 0.00 0.00 0.00 0.00 00:08:29.515 00:08:30.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.449 Nvme0n1 : 4.00 22776.00 88.97 0.00 0.00 0.00 0.00 0.00 00:08:30.449 [2024-12-07T08:42:59.175Z] =================================================================================================================== 00:08:30.449 [2024-12-07T08:42:59.175Z] Total : 22776.00 88.97 0.00 0.00 0.00 0.00 0.00 00:08:30.449 00:08:31.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.593 Nvme0n1 : 5.00 22796.20 89.05 0.00 0.00 0.00 0.00 0.00 00:08:31.593 [2024-12-07T08:43:00.319Z] =================================================================================================================== 00:08:31.593 [2024-12-07T08:43:00.319Z] Total : 22796.20 89.05 0.00 0.00 0.00 0.00 0.00 00:08:31.593 00:08:32.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.537 Nvme0n1 : 6.00 22775.33 88.97 0.00 0.00 0.00 0.00 0.00 00:08:32.537 [2024-12-07T08:43:01.263Z] =================================================================================================================== 00:08:32.537 
[2024-12-07T08:43:01.263Z] Total : 22775.33 88.97 0.00 0.00 0.00 0.00 0.00 00:08:32.537 00:08:33.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.473 Nvme0n1 : 7.00 22824.71 89.16 0.00 0.00 0.00 0.00 0.00 00:08:33.473 [2024-12-07T08:43:02.199Z] =================================================================================================================== 00:08:33.473 [2024-12-07T08:43:02.199Z] Total : 22824.71 89.16 0.00 0.00 0.00 0.00 0.00 00:08:33.473 00:08:34.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.410 Nvme0n1 : 8.00 22861.25 89.30 0.00 0.00 0.00 0.00 0.00 00:08:34.410 [2024-12-07T08:43:03.136Z] =================================================================================================================== 00:08:34.410 [2024-12-07T08:43:03.136Z] Total : 22861.25 89.30 0.00 0.00 0.00 0.00 0.00 00:08:34.410 00:08:35.789 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.789 Nvme0n1 : 9.00 22890.33 89.42 0.00 0.00 0.00 0.00 0.00 00:08:35.789 [2024-12-07T08:43:04.515Z] =================================================================================================================== 00:08:35.789 [2024-12-07T08:43:04.515Z] Total : 22890.33 89.42 0.00 0.00 0.00 0.00 0.00 00:08:35.789 00:08:36.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.725 Nvme0n1 : 10.00 22906.60 89.48 0.00 0.00 0.00 0.00 0.00 00:08:36.725 [2024-12-07T08:43:05.451Z] =================================================================================================================== 00:08:36.725 [2024-12-07T08:43:05.451Z] Total : 22906.60 89.48 0.00 0.00 0.00 0.00 0.00 00:08:36.725 00:08:36.725 00:08:36.725 Latency(us) 00:08:36.725 [2024-12-07T08:43:05.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:36.725 Nvme0n1 : 10.01 22905.72 89.48 0.00 0.00 5585.08 3433.52 12822.26 00:08:36.725 [2024-12-07T08:43:05.451Z] =================================================================================================================== 00:08:36.725 [2024-12-07T08:43:05.452Z] Total : 22905.72 89.48 0.00 0.00 5585.08 3433.52 12822.26 00:08:36.726 { 00:08:36.726 "results": [ 00:08:36.726 { 00:08:36.726 "job": "Nvme0n1", 00:08:36.726 "core_mask": "0x2", 00:08:36.726 "workload": "randwrite", 00:08:36.726 "status": "finished", 00:08:36.726 "queue_depth": 128, 00:08:36.726 "io_size": 4096, 00:08:36.726 "runtime": 10.005971, 00:08:36.726 "iops": 22905.722992801, 00:08:36.726 "mibps": 89.4754804406289, 00:08:36.726 "io_failed": 0, 00:08:36.726 "io_timeout": 0, 00:08:36.726 "avg_latency_us": 5585.082889748613, 00:08:36.726 "min_latency_us": 3433.5165217391304, 00:08:36.726 "max_latency_us": 12822.260869565218 00:08:36.726 } 00:08:36.726 ], 00:08:36.726 "core_count": 1 00:08:36.726 } 00:08:36.726 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1089542 00:08:36.726 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1089542 ']' 00:08:36.726 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1089542 00:08:36.726 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:36.726 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.726 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1089542 00:08:36.726 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:36.726 09:43:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:36.726 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1089542' 00:08:36.726 killing process with pid 1089542 00:08:36.726 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1089542 00:08:36.726 Received shutdown signal, test time was about 10.000000 seconds 00:08:36.726 00:08:36.726 Latency(us) 00:08:36.726 [2024-12-07T08:43:05.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.726 [2024-12-07T08:43:05.452Z] =================================================================================================================== 00:08:36.726 [2024-12-07T08:43:05.452Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:36.726 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1089542 00:08:36.726 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:36.984 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:37.242 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbd159d3-0478-4ed5-86f1-9bf43aff7238 00:08:37.242 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:37.242 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:37.242 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:37.242 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1086437 00:08:37.242 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1086437 00:08:37.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1086437 Killed "${NVMF_APP[@]}" "$@" 00:08:37.500 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:37.500 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:37.500 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:37.500 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:37.500 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:37.500 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=1091622 00:08:37.500 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 1091622 00:08:37.500 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:37.500 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1091622 ']' 00:08:37.500 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.500 09:43:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.500 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.500 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.500 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:37.500 [2024-12-07 09:43:06.066547] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:37.500 [2024-12-07 09:43:06.066597] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.500 [2024-12-07 09:43:06.123979] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.500 [2024-12-07 09:43:06.164270] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.500 [2024-12-07 09:43:06.164310] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.500 [2024-12-07 09:43:06.164317] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.500 [2024-12-07 09:43:06.164323] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.500 [2024-12-07 09:43:06.164328] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:37.500 [2024-12-07 09:43:06.164346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.759 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.759 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:37.759 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:37.759 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:37.759 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:37.759 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.759 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:37.759 [2024-12-07 09:43:06.468937] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:37.759 [2024-12-07 09:43:06.469038] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:37.759 [2024-12-07 09:43:06.469066] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:38.017 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:38.017 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4ded6e26-b1d5-4bb7-9c40-cbc7fff46ae9 00:08:38.017 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=4ded6e26-b1d5-4bb7-9c40-cbc7fff46ae9 
00:08:38.017 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:38.017 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:38.017 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:38.017 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:38.017 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:38.017 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4ded6e26-b1d5-4bb7-9c40-cbc7fff46ae9 -t 2000 00:08:38.276 [ 00:08:38.276 { 00:08:38.276 "name": "4ded6e26-b1d5-4bb7-9c40-cbc7fff46ae9", 00:08:38.276 "aliases": [ 00:08:38.276 "lvs/lvol" 00:08:38.276 ], 00:08:38.276 "product_name": "Logical Volume", 00:08:38.276 "block_size": 4096, 00:08:38.276 "num_blocks": 38912, 00:08:38.276 "uuid": "4ded6e26-b1d5-4bb7-9c40-cbc7fff46ae9", 00:08:38.276 "assigned_rate_limits": { 00:08:38.276 "rw_ios_per_sec": 0, 00:08:38.276 "rw_mbytes_per_sec": 0, 00:08:38.276 "r_mbytes_per_sec": 0, 00:08:38.276 "w_mbytes_per_sec": 0 00:08:38.276 }, 00:08:38.276 "claimed": false, 00:08:38.276 "zoned": false, 00:08:38.276 "supported_io_types": { 00:08:38.276 "read": true, 00:08:38.276 "write": true, 00:08:38.276 "unmap": true, 00:08:38.276 "flush": false, 00:08:38.276 "reset": true, 00:08:38.276 "nvme_admin": false, 00:08:38.276 "nvme_io": false, 00:08:38.276 "nvme_io_md": false, 00:08:38.276 "write_zeroes": true, 00:08:38.276 "zcopy": false, 00:08:38.276 "get_zone_info": false, 00:08:38.276 "zone_management": false, 00:08:38.276 "zone_append": 
false, 00:08:38.276 "compare": false, 00:08:38.276 "compare_and_write": false, 00:08:38.276 "abort": false, 00:08:38.276 "seek_hole": true, 00:08:38.276 "seek_data": true, 00:08:38.276 "copy": false, 00:08:38.276 "nvme_iov_md": false 00:08:38.276 }, 00:08:38.276 "driver_specific": { 00:08:38.276 "lvol": { 00:08:38.276 "lvol_store_uuid": "dbd159d3-0478-4ed5-86f1-9bf43aff7238", 00:08:38.276 "base_bdev": "aio_bdev", 00:08:38.276 "thin_provision": false, 00:08:38.276 "num_allocated_clusters": 38, 00:08:38.276 "snapshot": false, 00:08:38.276 "clone": false, 00:08:38.276 "esnap_clone": false 00:08:38.276 } 00:08:38.276 } 00:08:38.276 } 00:08:38.276 ] 00:08:38.276 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:38.276 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbd159d3-0478-4ed5-86f1-9bf43aff7238 00:08:38.276 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:38.535 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:38.535 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbd159d3-0478-4ed5-86f1-9bf43aff7238 00:08:38.535 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:38.795 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:38.795 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:38.795 [2024-12-07 09:43:07.441895] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:38.795 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbd159d3-0478-4ed5-86f1-9bf43aff7238 00:08:38.795 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:38.795 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbd159d3-0478-4ed5-86f1-9bf43aff7238 00:08:38.795 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.795 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.795 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.795 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.795 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.795 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.795 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.795 09:43:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:38.795 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbd159d3-0478-4ed5-86f1-9bf43aff7238 00:08:39.055 request: 00:08:39.055 { 00:08:39.055 "uuid": "dbd159d3-0478-4ed5-86f1-9bf43aff7238", 00:08:39.055 "method": "bdev_lvol_get_lvstores", 00:08:39.055 "req_id": 1 00:08:39.055 } 00:08:39.055 Got JSON-RPC error response 00:08:39.055 response: 00:08:39.055 { 00:08:39.055 "code": -19, 00:08:39.055 "message": "No such device" 00:08:39.055 } 00:08:39.055 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:39.055 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:39.055 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:39.055 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:39.055 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:39.314 aio_bdev 00:08:39.314 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4ded6e26-b1d5-4bb7-9c40-cbc7fff46ae9 00:08:39.314 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=4ded6e26-b1d5-4bb7-9c40-cbc7fff46ae9 00:08:39.314 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:39.314 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:39.314 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:39.314 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:39.314 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:39.574 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4ded6e26-b1d5-4bb7-9c40-cbc7fff46ae9 -t 2000 00:08:39.574 [ 00:08:39.574 { 00:08:39.574 "name": "4ded6e26-b1d5-4bb7-9c40-cbc7fff46ae9", 00:08:39.574 "aliases": [ 00:08:39.574 "lvs/lvol" 00:08:39.574 ], 00:08:39.574 "product_name": "Logical Volume", 00:08:39.574 "block_size": 4096, 00:08:39.574 "num_blocks": 38912, 00:08:39.574 "uuid": "4ded6e26-b1d5-4bb7-9c40-cbc7fff46ae9", 00:08:39.574 "assigned_rate_limits": { 00:08:39.574 "rw_ios_per_sec": 0, 00:08:39.574 "rw_mbytes_per_sec": 0, 00:08:39.574 "r_mbytes_per_sec": 0, 00:08:39.574 "w_mbytes_per_sec": 0 00:08:39.574 }, 00:08:39.574 "claimed": false, 00:08:39.574 "zoned": false, 00:08:39.574 "supported_io_types": { 00:08:39.574 "read": true, 00:08:39.574 "write": true, 00:08:39.574 "unmap": true, 00:08:39.574 "flush": false, 00:08:39.574 "reset": true, 00:08:39.574 "nvme_admin": false, 00:08:39.574 "nvme_io": false, 00:08:39.574 "nvme_io_md": false, 00:08:39.574 "write_zeroes": true, 00:08:39.574 "zcopy": false, 00:08:39.574 "get_zone_info": false, 00:08:39.574 "zone_management": false, 00:08:39.574 "zone_append": false, 00:08:39.574 "compare": false, 00:08:39.574 "compare_and_write": false, 
00:08:39.574 "abort": false, 00:08:39.574 "seek_hole": true, 00:08:39.574 "seek_data": true, 00:08:39.574 "copy": false, 00:08:39.574 "nvme_iov_md": false 00:08:39.574 }, 00:08:39.574 "driver_specific": { 00:08:39.574 "lvol": { 00:08:39.574 "lvol_store_uuid": "dbd159d3-0478-4ed5-86f1-9bf43aff7238", 00:08:39.574 "base_bdev": "aio_bdev", 00:08:39.574 "thin_provision": false, 00:08:39.574 "num_allocated_clusters": 38, 00:08:39.574 "snapshot": false, 00:08:39.574 "clone": false, 00:08:39.574 "esnap_clone": false 00:08:39.574 } 00:08:39.574 } 00:08:39.574 } 00:08:39.574 ] 00:08:39.574 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:39.574 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbd159d3-0478-4ed5-86f1-9bf43aff7238 00:08:39.574 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:39.833 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:39.833 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbd159d3-0478-4ed5-86f1-9bf43aff7238 00:08:39.833 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:40.093 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:40.093 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4ded6e26-b1d5-4bb7-9c40-cbc7fff46ae9 00:08:40.352 09:43:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dbd159d3-0478-4ed5-86f1-9bf43aff7238 00:08:40.352 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:40.611 00:08:40.611 real 0m16.885s 00:08:40.611 user 0m43.629s 00:08:40.611 sys 0m3.781s 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:40.611 ************************************ 00:08:40.611 END TEST lvs_grow_dirty 00:08:40.611 ************************************ 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:40.611 nvmf_trace.0 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:40.611 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:40.611 rmmod nvme_tcp 00:08:40.871 rmmod nvme_fabrics 00:08:40.871 rmmod nvme_keyring 00:08:40.871 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:40.871 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:40.871 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:40.871 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 1091622 ']' 00:08:40.871 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 1091622 00:08:40.871 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1091622 ']' 00:08:40.871 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1091622 
00:08:40.871 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:40.871 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:40.871 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1091622 00:08:40.871 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:40.871 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:40.871 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1091622' 00:08:40.871 killing process with pid 1091622 00:08:40.871 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1091622 00:08:40.871 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1091622 00:08:41.131 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:41.131 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:41.131 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:41.131 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:41.131 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:08:41.131 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:08:41.131 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:41.131 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:41.131 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:08:41.131 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.131 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.131 09:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.052 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:43.052 00:08:43.052 real 0m40.783s 00:08:43.052 user 1m4.061s 00:08:43.052 sys 0m9.536s 00:08:43.052 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.052 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:43.052 ************************************ 00:08:43.052 END TEST nvmf_lvs_grow 00:08:43.052 ************************************ 00:08:43.052 09:43:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:43.052 09:43:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:43.052 09:43:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.052 09:43:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:43.052 ************************************ 00:08:43.052 START TEST nvmf_bdev_io_wait 00:08:43.052 ************************************ 00:08:43.052 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:43.312 * Looking for test storage... 
00:08:43.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.312 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:43.312 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:08:43.312 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:43.312 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:43.312 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.312 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.312 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.312 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.312 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:43.313 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.313 --rc genhtml_branch_coverage=1 00:08:43.313 --rc genhtml_function_coverage=1 00:08:43.313 --rc genhtml_legend=1 00:08:43.313 --rc geninfo_all_blocks=1 00:08:43.313 --rc geninfo_unexecuted_blocks=1 00:08:43.313 00:08:43.313 ' 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:43.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.313 --rc genhtml_branch_coverage=1 00:08:43.313 --rc genhtml_function_coverage=1 00:08:43.313 --rc genhtml_legend=1 00:08:43.313 --rc geninfo_all_blocks=1 00:08:43.313 --rc geninfo_unexecuted_blocks=1 00:08:43.313 00:08:43.313 ' 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:43.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.313 --rc genhtml_branch_coverage=1 00:08:43.313 --rc genhtml_function_coverage=1 00:08:43.313 --rc genhtml_legend=1 00:08:43.313 --rc geninfo_all_blocks=1 00:08:43.313 --rc geninfo_unexecuted_blocks=1 00:08:43.313 00:08:43.313 ' 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:43.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.313 --rc genhtml_branch_coverage=1 00:08:43.313 --rc genhtml_function_coverage=1 00:08:43.313 --rc genhtml_legend=1 00:08:43.313 --rc geninfo_all_blocks=1 00:08:43.313 --rc geninfo_unexecuted_blocks=1 00:08:43.313 00:08:43.313 ' 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.313 09:43:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.313 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:43.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:43.314 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:43.314 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:43.314 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:43.314 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:43.314 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:43.314 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:43.314 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:43.314 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.314 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:43.314 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:43.314 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:43.314 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.314 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.314 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:43.314 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:43.314 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:43.314 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:43.314 09:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:49.884 09:43:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:49.884 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:49.884 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.884 
09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:49.884 Found net devices under 0000:86:00.0: cvl_0_0 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:49.884 
09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.884 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:49.885 Found net devices under 0000:86:00.1: cvl_0_1 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:49.885 09:43:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:49.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:08:49.885 00:08:49.885 --- 10.0.0.2 ping statistics --- 00:08:49.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.885 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:08:49.885 00:08:49.885 --- 10.0.0.1 ping statistics --- 00:08:49.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.885 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=1095711 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 1095711 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1095711 ']' 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.885 [2024-12-07 09:43:17.660984] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:49.885 [2024-12-07 09:43:17.661027] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.885 [2024-12-07 09:43:17.720550] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.885 [2024-12-07 09:43:17.761397] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.885 [2024-12-07 09:43:17.761440] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.885 [2024-12-07 09:43:17.761447] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.885 [2024-12-07 09:43:17.761454] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.885 [2024-12-07 09:43:17.761460] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:49.885 [2024-12-07 09:43:17.761562] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.885 [2024-12-07 09:43:17.761679] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.885 [2024-12-07 09:43:17.761746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.885 [2024-12-07 09:43:17.761747] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.885 [2024-12-07 09:43:17.922687] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.885 Malloc0 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.885 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:49.886 09:43:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.886 [2024-12-07 09:43:17.981332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1095737 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1095739 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:49.886 { 00:08:49.886 "params": { 00:08:49.886 "name": "Nvme$subsystem", 00:08:49.886 "trtype": "$TEST_TRANSPORT", 00:08:49.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.886 "adrfam": "ipv4", 00:08:49.886 "trsvcid": "$NVMF_PORT", 00:08:49.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:49.886 "hdgst": ${hdgst:-false}, 00:08:49.886 "ddgst": ${ddgst:-false} 00:08:49.886 }, 00:08:49.886 "method": "bdev_nvme_attach_controller" 00:08:49.886 } 00:08:49.886 EOF 00:08:49.886 )") 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1095741 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1095744 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # 
config+=("$(cat <<-EOF 00:08:49.886 { 00:08:49.886 "params": { 00:08:49.886 "name": "Nvme$subsystem", 00:08:49.886 "trtype": "$TEST_TRANSPORT", 00:08:49.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.886 "adrfam": "ipv4", 00:08:49.886 "trsvcid": "$NVMF_PORT", 00:08:49.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:49.886 "hdgst": ${hdgst:-false}, 00:08:49.886 "ddgst": ${ddgst:-false} 00:08:49.886 }, 00:08:49.886 "method": "bdev_nvme_attach_controller" 00:08:49.886 } 00:08:49.886 EOF 00:08:49.886 )") 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:49.886 { 00:08:49.886 "params": { 00:08:49.886 "name": "Nvme$subsystem", 00:08:49.886 "trtype": "$TEST_TRANSPORT", 00:08:49.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.886 "adrfam": "ipv4", 00:08:49.886 "trsvcid": "$NVMF_PORT", 00:08:49.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:49.886 "hdgst": ${hdgst:-false}, 00:08:49.886 "ddgst": ${ddgst:-false} 00:08:49.886 }, 00:08:49.886 "method": "bdev_nvme_attach_controller" 00:08:49.886 } 00:08:49.886 EOF 00:08:49.886 )") 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 
-q 128 -o 4096 -w unmap -t 1 -s 256 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:49.886 { 00:08:49.886 "params": { 00:08:49.886 "name": "Nvme$subsystem", 00:08:49.886 "trtype": "$TEST_TRANSPORT", 00:08:49.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.886 "adrfam": "ipv4", 00:08:49.886 "trsvcid": "$NVMF_PORT", 00:08:49.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:49.886 "hdgst": ${hdgst:-false}, 00:08:49.886 "ddgst": ${ddgst:-false} 00:08:49.886 }, 00:08:49.886 "method": "bdev_nvme_attach_controller" 00:08:49.886 } 00:08:49.886 EOF 00:08:49.886 )") 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1095737 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:08:49.886 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:49.886 "params": { 00:08:49.886 "name": "Nvme1", 00:08:49.886 "trtype": "tcp", 00:08:49.886 "traddr": "10.0.0.2", 00:08:49.886 "adrfam": "ipv4", 00:08:49.886 "trsvcid": "4420", 00:08:49.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:49.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:49.886 "hdgst": false, 00:08:49.886 "ddgst": false 00:08:49.886 }, 00:08:49.886 "method": "bdev_nvme_attach_controller" 00:08:49.886 }' 00:08:49.886 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:08:49.886 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:08:49.886 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:49.886 "params": { 00:08:49.886 "name": "Nvme1", 00:08:49.886 "trtype": "tcp", 00:08:49.886 "traddr": "10.0.0.2", 00:08:49.886 "adrfam": "ipv4", 00:08:49.886 "trsvcid": "4420", 00:08:49.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:49.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:49.886 "hdgst": false, 00:08:49.886 "ddgst": false 00:08:49.886 }, 00:08:49.886 "method": "bdev_nvme_attach_controller" 00:08:49.886 }' 00:08:49.886 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:08:49.886 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:49.886 "params": { 00:08:49.886 "name": "Nvme1", 00:08:49.886 "trtype": "tcp", 00:08:49.886 "traddr": "10.0.0.2", 00:08:49.886 "adrfam": "ipv4", 00:08:49.886 "trsvcid": "4420", 00:08:49.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:49.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:49.886 "hdgst": false, 00:08:49.886 "ddgst": false 00:08:49.886 }, 00:08:49.886 "method": 
"bdev_nvme_attach_controller" 00:08:49.886 }' 00:08:49.886 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:08:49.886 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:49.886 "params": { 00:08:49.886 "name": "Nvme1", 00:08:49.886 "trtype": "tcp", 00:08:49.886 "traddr": "10.0.0.2", 00:08:49.886 "adrfam": "ipv4", 00:08:49.886 "trsvcid": "4420", 00:08:49.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:49.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:49.886 "hdgst": false, 00:08:49.886 "ddgst": false 00:08:49.886 }, 00:08:49.886 "method": "bdev_nvme_attach_controller" 00:08:49.886 }' 00:08:49.886 [2024-12-07 09:43:18.032288] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:49.886 [2024-12-07 09:43:18.032335] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:49.886 [2024-12-07 09:43:18.034290] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:49.887 [2024-12-07 09:43:18.034290] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:49.887 [2024-12-07 09:43:18.034344] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:49.887 [2024-12-07 09:43:18.034344] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:49.887 [2024-12-07 09:43:18.038253] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:49.887 [2024-12-07 09:43:18.038295] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:49.887 [2024-12-07 09:43:18.173796] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.887 [2024-12-07 09:43:18.199388] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:08:49.887 [2024-12-07 09:43:18.266111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.887 [2024-12-07 09:43:18.293788] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:08:49.887 [2024-12-07 09:43:18.371184] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.887 [2024-12-07 09:43:18.401727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:49.887 [2024-12-07 09:43:18.431573] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.887 [2024-12-07 09:43:18.459091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:08:50.144 Running I/O for 1 seconds... 00:08:50.144 Running I/O for 1 seconds... 00:08:50.144 Running I/O for 1 seconds... 00:08:50.401 Running I/O for 1 seconds... 
00:08:51.336 11996.00 IOPS, 46.86 MiB/s 00:08:51.336 Latency(us) 00:08:51.336 [2024-12-07T08:43:20.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.336 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:51.336 Nvme1n1 : 1.01 12034.73 47.01 0.00 0.00 10592.21 6468.12 13962.02 00:08:51.336 [2024-12-07T08:43:20.062Z] =================================================================================================================== 00:08:51.336 [2024-12-07T08:43:20.062Z] Total : 12034.73 47.01 0.00 0.00 10592.21 6468.12 13962.02 00:08:51.336 246584.00 IOPS, 963.22 MiB/s 00:08:51.336 Latency(us) 00:08:51.336 [2024-12-07T08:43:20.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.336 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:51.336 Nvme1n1 : 1.00 246193.66 961.69 0.00 0.00 517.12 245.76 1560.04 00:08:51.336 [2024-12-07T08:43:20.062Z] =================================================================================================================== 00:08:51.336 [2024-12-07T08:43:20.062Z] Total : 246193.66 961.69 0.00 0.00 517.12 245.76 1560.04 00:08:51.336 10100.00 IOPS, 39.45 MiB/s 00:08:51.336 Latency(us) 00:08:51.336 [2024-12-07T08:43:20.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.336 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:51.336 Nvme1n1 : 1.01 10169.08 39.72 0.00 0.00 12543.40 5100.41 22909.11 00:08:51.336 [2024-12-07T08:43:20.062Z] =================================================================================================================== 00:08:51.336 [2024-12-07T08:43:20.062Z] Total : 10169.08 39.72 0.00 0.00 12543.40 5100.41 22909.11 00:08:51.336 11161.00 IOPS, 43.60 MiB/s 00:08:51.336 Latency(us) 00:08:51.336 [2024-12-07T08:43:20.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.336 Job: Nvme1n1 (Core 
Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:51.336 Nvme1n1 : 1.01 11240.61 43.91 0.00 0.00 11357.67 3462.01 23820.91 00:08:51.336 [2024-12-07T08:43:20.062Z] =================================================================================================================== 00:08:51.336 [2024-12-07T08:43:20.062Z] Total : 11240.61 43.91 0.00 0.00 11357.67 3462.01 23820.91 00:08:51.336 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1095739 00:08:51.336 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1095741 00:08:51.336 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1095744 00:08:51.594 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:51.594 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.594 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.594 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.594 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:51.594 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:51.594 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:51.594 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:51.594 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:51.594 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:51.594 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:08:51.594 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:51.594 rmmod nvme_tcp 00:08:51.594 rmmod nvme_fabrics 00:08:51.594 rmmod nvme_keyring 00:08:51.594 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:51.594 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:51.594 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:51.595 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 1095711 ']' 00:08:51.595 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 1095711 00:08:51.595 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1095711 ']' 00:08:51.595 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1095711 00:08:51.595 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:51.595 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:51.595 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1095711 00:08:51.853 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:51.853 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:51.853 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1095711' 00:08:51.853 killing process with pid 1095711 00:08:51.853 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1095711 00:08:51.853 09:43:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1095711 00:08:51.853 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:51.853 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:51.853 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:51.853 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:51.853 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:51.853 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:08:51.853 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:08:51.853 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.853 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:51.853 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.853 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.853 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:54.392 00:08:54.392 real 0m10.809s 00:08:54.392 user 0m17.838s 00:08:54.392 sys 0m6.132s 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.392 ************************************ 
00:08:54.392 END TEST nvmf_bdev_io_wait 00:08:54.392 ************************************ 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:54.392 ************************************ 00:08:54.392 START TEST nvmf_queue_depth 00:08:54.392 ************************************ 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:54.392 * Looking for test storage... 00:08:54.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.392 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:54.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.393 --rc genhtml_branch_coverage=1 00:08:54.393 --rc genhtml_function_coverage=1 00:08:54.393 --rc genhtml_legend=1 00:08:54.393 --rc geninfo_all_blocks=1 00:08:54.393 --rc 
geninfo_unexecuted_blocks=1 00:08:54.393 00:08:54.393 ' 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:54.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.393 --rc genhtml_branch_coverage=1 00:08:54.393 --rc genhtml_function_coverage=1 00:08:54.393 --rc genhtml_legend=1 00:08:54.393 --rc geninfo_all_blocks=1 00:08:54.393 --rc geninfo_unexecuted_blocks=1 00:08:54.393 00:08:54.393 ' 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:54.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.393 --rc genhtml_branch_coverage=1 00:08:54.393 --rc genhtml_function_coverage=1 00:08:54.393 --rc genhtml_legend=1 00:08:54.393 --rc geninfo_all_blocks=1 00:08:54.393 --rc geninfo_unexecuted_blocks=1 00:08:54.393 00:08:54.393 ' 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:54.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.393 --rc genhtml_branch_coverage=1 00:08:54.393 --rc genhtml_function_coverage=1 00:08:54.393 --rc genhtml_legend=1 00:08:54.393 --rc geninfo_all_blocks=1 00:08:54.393 --rc geninfo_unexecuted_blocks=1 00:08:54.393 00:08:54.393 ' 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.393 09:43:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.393 09:43:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:54.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.393 09:43:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:54.393 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.671 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.671 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:59.671 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:59.671 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:59.671 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:59.671 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:59.671 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:59.671 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:59.671 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:59.671 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:59.671 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:59.671 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:59.671 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:59.672 09:43:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 
== mlx5 ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:59.672 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:59.672 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 
]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:59.672 Found net devices under 0000:86:00.0: cvl_0_0 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:59.672 
09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:59.672 Found net devices under 0000:86:00.1: cvl_0_1 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:59.672 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.672 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.672 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:08:59.672 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:59.672 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:59.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:08:59.672 00:08:59.672 --- 10.0.0.2 ping statistics --- 00:08:59.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.672 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:08:59.672 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:59.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:08:59.672 00:08:59.672 --- 10.0.0.1 ping statistics --- 00:08:59.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.672 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:08:59.672 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.672 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:08:59.672 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:59.672 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.672 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:59.672 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:59.672 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:08:59.672 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:59.672 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:59.672 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:59.673 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:59.673 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:59.673 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.673 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=1099560 00:08:59.673 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 1099560 00:08:59.673 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1099560 ']' 00:08:59.673 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.673 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.673 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:59.673 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.673 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:59.673 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.673 [2024-12-07 09:43:28.174774] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:59.673 [2024-12-07 09:43:28.174822] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.673 [2024-12-07 09:43:28.236153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.673 [2024-12-07 09:43:28.276219] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.673 [2024-12-07 09:43:28.276258] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.673 [2024-12-07 09:43:28.276265] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.673 [2024-12-07 09:43:28.276271] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.673 [2024-12-07 09:43:28.276276] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:59.673 [2024-12-07 09:43:28.276295] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.673 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.673 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:59.673 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:59.673 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:59.673 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.933 [2024-12-07 09:43:28.406876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.933 Malloc0 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.933 09:43:28 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.933 [2024-12-07 09:43:28.459158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1099767 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1099767 /var/tmp/bdevperf.sock 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1099767 ']' 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:59.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.933 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.933 [2024-12-07 09:43:28.506204] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:59.933 [2024-12-07 09:43:28.506253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1099767 ] 00:08:59.933 [2024-12-07 09:43:28.561192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.933 [2024-12-07 09:43:28.604026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.193 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.193 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:00.193 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:00.193 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.193 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.193 NVMe0n1 00:09:00.193 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.193 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:00.453 Running I/O for 10 seconds... 
00:09:02.329 11268.00 IOPS, 44.02 MiB/s [2024-12-07T08:43:31.993Z] 11725.50 IOPS, 45.80 MiB/s [2024-12-07T08:43:33.373Z] 11810.33 IOPS, 46.13 MiB/s [2024-12-07T08:43:34.311Z] 11862.50 IOPS, 46.34 MiB/s [2024-12-07T08:43:35.246Z] 11868.60 IOPS, 46.36 MiB/s [2024-12-07T08:43:36.182Z] 11911.33 IOPS, 46.53 MiB/s [2024-12-07T08:43:37.118Z] 11962.57 IOPS, 46.73 MiB/s [2024-12-07T08:43:38.052Z] 11994.25 IOPS, 46.85 MiB/s [2024-12-07T08:43:39.441Z] 11953.11 IOPS, 46.69 MiB/s [2024-12-07T08:43:39.441Z] 12001.60 IOPS, 46.88 MiB/s 00:09:10.715 Latency(us) 00:09:10.715 [2024-12-07T08:43:39.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.715 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:10.715 Verification LBA range: start 0x0 length 0x4000 00:09:10.715 NVMe0n1 : 10.05 12034.24 47.01 0.00 0.00 84785.02 14588.88 56759.87 00:09:10.715 [2024-12-07T08:43:39.441Z] =================================================================================================================== 00:09:10.715 [2024-12-07T08:43:39.441Z] Total : 12034.24 47.01 0.00 0.00 84785.02 14588.88 56759.87 00:09:10.715 { 00:09:10.715 "results": [ 00:09:10.715 { 00:09:10.715 "job": "NVMe0n1", 00:09:10.715 "core_mask": "0x1", 00:09:10.715 "workload": "verify", 00:09:10.715 "status": "finished", 00:09:10.715 "verify_range": { 00:09:10.715 "start": 0, 00:09:10.715 "length": 16384 00:09:10.715 }, 00:09:10.715 "queue_depth": 1024, 00:09:10.715 "io_size": 4096, 00:09:10.715 "runtime": 10.054646, 00:09:10.715 "iops": 12034.237704639228, 00:09:10.715 "mibps": 47.008741033746986, 00:09:10.715 "io_failed": 0, 00:09:10.715 "io_timeout": 0, 00:09:10.715 "avg_latency_us": 84785.01774303988, 00:09:10.715 "min_latency_us": 14588.88347826087, 00:09:10.715 "max_latency_us": 56759.8747826087 00:09:10.715 } 00:09:10.715 ], 00:09:10.715 "core_count": 1 00:09:10.715 } 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 1099767 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1099767 ']' 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1099767 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1099767 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1099767' 00:09:10.715 killing process with pid 1099767 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1099767 00:09:10.715 Received shutdown signal, test time was about 10.000000 seconds 00:09:10.715 00:09:10.715 Latency(us) 00:09:10.715 [2024-12-07T08:43:39.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.715 [2024-12-07T08:43:39.441Z] =================================================================================================================== 00:09:10.715 [2024-12-07T08:43:39.441Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1099767 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:10.715 rmmod nvme_tcp 00:09:10.715 rmmod nvme_fabrics 00:09:10.715 rmmod nvme_keyring 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 1099560 ']' 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 1099560 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1099560 ']' 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1099560 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1099560 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:09:10.715 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:10.975 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1099560' 00:09:10.975 killing process with pid 1099560 00:09:10.975 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1099560 00:09:10.975 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1099560 00:09:10.975 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:10.975 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:10.975 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:10.975 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:10.975 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:09:10.975 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:10.975 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:09:10.975 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:10.975 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:10.975 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.975 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.975 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.513 09:43:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:13.513 00:09:13.513 real 0m19.077s 00:09:13.513 user 0m22.881s 00:09:13.513 sys 0m5.607s 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.513 ************************************ 00:09:13.513 END TEST nvmf_queue_depth 00:09:13.513 ************************************ 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:13.513 ************************************ 00:09:13.513 START TEST nvmf_target_multipath 00:09:13.513 ************************************ 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:13.513 * Looking for test storage... 
00:09:13.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:13.513 09:43:41 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:13.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.513 --rc genhtml_branch_coverage=1 00:09:13.513 --rc genhtml_function_coverage=1 00:09:13.513 --rc genhtml_legend=1 00:09:13.513 --rc geninfo_all_blocks=1 00:09:13.513 --rc geninfo_unexecuted_blocks=1 00:09:13.513 00:09:13.513 ' 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:13.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.513 --rc genhtml_branch_coverage=1 00:09:13.513 --rc genhtml_function_coverage=1 00:09:13.513 --rc genhtml_legend=1 00:09:13.513 --rc geninfo_all_blocks=1 00:09:13.513 --rc geninfo_unexecuted_blocks=1 00:09:13.513 00:09:13.513 ' 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:13.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.513 --rc genhtml_branch_coverage=1 00:09:13.513 --rc genhtml_function_coverage=1 00:09:13.513 --rc genhtml_legend=1 00:09:13.513 --rc geninfo_all_blocks=1 00:09:13.513 --rc geninfo_unexecuted_blocks=1 00:09:13.513 00:09:13.513 ' 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:13.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.513 --rc genhtml_branch_coverage=1 00:09:13.513 --rc genhtml_function_coverage=1 00:09:13.513 --rc genhtml_legend=1 00:09:13.513 --rc geninfo_all_blocks=1 00:09:13.513 --rc geninfo_unexecuted_blocks=1 00:09:13.513 00:09:13.513 ' 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.513 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:13.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:13.514 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:18.793 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.793 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:18.793 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:18.793 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:18.793 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:18.793 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:18.793 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:18.793 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:18.793 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:18.793 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:18.793 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:18.793 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 
2 == 0 )) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:18.794 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:18.794 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:18.794 09:43:46 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:18.794 Found net devices under 0000:86:00.0: cvl_0_0 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.794 09:43:46 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:18.794 Found net devices under 0000:86:00.1: cvl_0_1 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.794 
09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:18.794 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:18.794 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.794 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.794 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:18.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:09:18.795 00:09:18.795 --- 10.0.0.2 ping statistics --- 00:09:18.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.795 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:18.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:09:18.795 00:09:18.795 --- 10.0.0.1 ping statistics --- 00:09:18.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.795 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:18.795 only one NIC for nvmf test 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:18.795 rmmod nvme_tcp 00:09:18.795 rmmod nvme_fabrics 00:09:18.795 rmmod nvme_keyring 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == 
iso ']' 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.795 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
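The `iptr` cleanup traced here relies on an idiom set up earlier in the log: every rule the test inserts carries an `SPDK_NVMF` comment, so teardown can drop all of them at once with `iptables-save | grep -v SPDK_NVMF | iptables-restore`. The filtering step can be illustrated on a captured ruleset string (the live pipeline needs root, so this sketch substitutes a literal for the `iptables-save` output):

```shell
# Two saved rules: one tagged by SPDK, one unrelated. The tag is the
# -m comment --comment 'SPDK_NVMF:...' suffix added at insert time.
saved_rules='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:rule
-A INPUT -s 192.168.0.0/24 -j ACCEPT'

# Equivalent of: iptables-save | grep -v SPDK_NVMF | iptables-restore
kept=$(printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF)
echo "$kept"
```

Only the untagged rule survives the filter, which is why the test can add rules freely without tracking them individually.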
00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:20.703 00:09:20.703 real 0m7.629s 00:09:20.703 user 0m1.703s 00:09:20.703 sys 0m3.930s 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.703 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:20.703 ************************************ 00:09:20.703 END TEST nvmf_target_multipath 00:09:20.703 ************************************ 00:09:20.963 09:43:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:20.963 09:43:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:20.963 09:43:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.963 09:43:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:20.963 ************************************ 00:09:20.963 START TEST nvmf_zcopy 00:09:20.963 ************************************ 00:09:20.963 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:20.963 * Looking for test storage... 
00:09:20.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.963 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:20.963 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:09:20.963 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:20.963 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:20.963 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.963 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.963 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.964 
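The `cmp_versions` trace that follows (checking whether `lcov` 1.15 is older than 2) splits both version strings on dots and compares them field by field as integers, which is what makes `1.9 < 1.15` come out correctly where a string compare would not. A simplified stand-in for that logic (not the exact `scripts/common.sh` implementation):

```shell
# version_lt A B: succeed iff dotted version A is strictly less than B,
# comparing numeric fields left to right and padding missing fields with 0.
version_lt() {
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo 'lcov 1.15 predates 2: use legacy --rc options'
```

This is why the log proceeds to set the `lcov_branch_coverage`/`lcov_function_coverage` style `LCOV_OPTS` rather than the newer option names.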
09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:20.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.964 --rc genhtml_branch_coverage=1 00:09:20.964 --rc genhtml_function_coverage=1 00:09:20.964 --rc genhtml_legend=1 00:09:20.964 --rc geninfo_all_blocks=1 00:09:20.964 --rc 
geninfo_unexecuted_blocks=1 00:09:20.964 00:09:20.964 ' 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:20.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.964 --rc genhtml_branch_coverage=1 00:09:20.964 --rc genhtml_function_coverage=1 00:09:20.964 --rc genhtml_legend=1 00:09:20.964 --rc geninfo_all_blocks=1 00:09:20.964 --rc geninfo_unexecuted_blocks=1 00:09:20.964 00:09:20.964 ' 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:20.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.964 --rc genhtml_branch_coverage=1 00:09:20.964 --rc genhtml_function_coverage=1 00:09:20.964 --rc genhtml_legend=1 00:09:20.964 --rc geninfo_all_blocks=1 00:09:20.964 --rc geninfo_unexecuted_blocks=1 00:09:20.964 00:09:20.964 ' 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:20.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.964 --rc genhtml_branch_coverage=1 00:09:20.964 --rc genhtml_function_coverage=1 00:09:20.964 --rc genhtml_legend=1 00:09:20.964 --rc geninfo_all_blocks=1 00:09:20.964 --rc geninfo_unexecuted_blocks=1 00:09:20.964 00:09:20.964 ' 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.964 09:43:49 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.964 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:20.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.965 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.224 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:21.224 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:21.224 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:21.224 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # 
set +x 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:26.504 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:26.504 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice 
== unknown ]] 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:26.505 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:26.505 09:43:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:26.505 Found net devices under 0000:86:00.0: cvl_0_0 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:26.505 Found net devices under 0000:86:00.1: cvl_0_1 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:26.505 09:43:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:26.505 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:26.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:09:26.764 00:09:26.764 --- 10.0.0.2 ping statistics --- 00:09:26.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.764 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:26.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:09:26.764 00:09:26.764 --- 10.0.0.1 ping statistics --- 00:09:26.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.764 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=1108447 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 1108447 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1108447 ']' 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:26.764 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.764 [2024-12-07 09:43:55.333007] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:26.764 [2024-12-07 09:43:55.333057] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.764 [2024-12-07 09:43:55.391520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.764 [2024-12-07 09:43:55.431868] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.764 [2024-12-07 09:43:55.431906] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:26.764 [2024-12-07 09:43:55.431914] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.764 [2024-12-07 09:43:55.431920] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.764 [2024-12-07 09:43:55.431925] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.764 [2024-12-07 09:43:55.431968] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.023 [2024-12-07 09:43:55.562334] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.023 [2024-12-07 09:43:55.582531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.023 malloc0 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:27.023 { 00:09:27.023 "params": { 00:09:27.023 "name": "Nvme$subsystem", 00:09:27.023 "trtype": "$TEST_TRANSPORT", 00:09:27.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:27.023 "adrfam": "ipv4", 00:09:27.023 "trsvcid": "$NVMF_PORT", 00:09:27.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:27.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:27.023 "hdgst": ${hdgst:-false}, 00:09:27.023 "ddgst": ${ddgst:-false} 00:09:27.023 }, 00:09:27.023 "method": "bdev_nvme_attach_controller" 00:09:27.023 } 00:09:27.023 EOF 00:09:27.023 )") 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:09:27.023 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:27.023 "params": { 00:09:27.023 "name": "Nvme1", 00:09:27.023 "trtype": "tcp", 00:09:27.023 "traddr": "10.0.0.2", 00:09:27.023 "adrfam": "ipv4", 00:09:27.023 "trsvcid": "4420", 00:09:27.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:27.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:27.023 "hdgst": false, 00:09:27.023 "ddgst": false 00:09:27.023 }, 00:09:27.023 "method": "bdev_nvme_attach_controller" 00:09:27.023 }' 00:09:27.023 [2024-12-07 09:43:55.676750] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:27.023 [2024-12-07 09:43:55.676794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1108467 ] 00:09:27.023 [2024-12-07 09:43:55.730540] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.281 [2024-12-07 09:43:55.771193] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.539 Running I/O for 10 seconds... 
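For readability, the `gen_nvmf_target_json` expansion printed just above is the single-controller bdevperf config passed via `--json /dev/fd/62`; the values are the ones substituted in this run (target 10.0.0.2:4420, subsystem cnode1):

```json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
```

Note the outer `"subsystems"`/`"config"` wrapper is an assumption about how bdevperf consumes the fragment; the inner object is verbatim from the log output above.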
00:09:29.454 8417.00 IOPS, 65.76 MiB/s [2024-12-07T08:43:59.118Z] 8459.00 IOPS, 66.09 MiB/s [2024-12-07T08:44:00.056Z] 8482.00 IOPS, 66.27 MiB/s [2024-12-07T08:44:01.437Z] 8487.00 IOPS, 66.30 MiB/s [2024-12-07T08:44:02.375Z] 8460.60 IOPS, 66.10 MiB/s [2024-12-07T08:44:03.313Z] 8474.83 IOPS, 66.21 MiB/s [2024-12-07T08:44:04.253Z] 8485.00 IOPS, 66.29 MiB/s [2024-12-07T08:44:05.212Z] 8499.50 IOPS, 66.40 MiB/s [2024-12-07T08:44:06.148Z] 8502.00 IOPS, 66.42 MiB/s [2024-12-07T08:44:06.148Z] 8504.10 IOPS, 66.44 MiB/s 00:09:37.422 Latency(us) 00:09:37.422 [2024-12-07T08:44:06.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.422 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:37.422 Verification LBA range: start 0x0 length 0x1000 00:09:37.422 Nvme1n1 : 10.01 8507.44 66.46 0.00 0.00 15003.54 2122.80 23592.96 00:09:37.422 [2024-12-07T08:44:06.148Z] =================================================================================================================== 00:09:37.422 [2024-12-07T08:44:06.148Z] Total : 8507.44 66.46 0.00 0.00 15003.54 2122.80 23592.96 00:09:37.682 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1110299 00:09:37.682 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:37.682 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.683 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:37.683 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:37.683 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:09:37.683 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:09:37.683 09:44:06 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:37.683 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:37.683 { 00:09:37.683 "params": { 00:09:37.683 "name": "Nvme$subsystem", 00:09:37.683 "trtype": "$TEST_TRANSPORT", 00:09:37.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:37.683 "adrfam": "ipv4", 00:09:37.683 "trsvcid": "$NVMF_PORT", 00:09:37.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:37.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:37.683 "hdgst": ${hdgst:-false}, 00:09:37.683 "ddgst": ${ddgst:-false} 00:09:37.683 }, 00:09:37.683 "method": "bdev_nvme_attach_controller" 00:09:37.683 } 00:09:37.683 EOF 00:09:37.683 )") 00:09:37.683 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:09:37.683 [2024-12-07 09:44:06.233557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.683 [2024-12-07 09:44:06.233591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.683 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
00:09:37.683 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:09:37.683 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:37.683 "params": { 00:09:37.683 "name": "Nvme1", 00:09:37.683 "trtype": "tcp", 00:09:37.683 "traddr": "10.0.0.2", 00:09:37.683 "adrfam": "ipv4", 00:09:37.683 "trsvcid": "4420", 00:09:37.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:37.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:37.683 "hdgst": false, 00:09:37.683 "ddgst": false 00:09:37.683 }, 00:09:37.683 "method": "bdev_nvme_attach_controller" 00:09:37.683 }' 00:09:37.683 [2024-12-07 09:44:06.245552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.683 [2024-12-07 09:44:06.245565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.683 [2024-12-07 09:44:06.257579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.683 [2024-12-07 09:44:06.257588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.683 [2024-12-07 09:44:06.269611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.683 [2024-12-07 09:44:06.269620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.683 [2024-12-07 09:44:06.270789] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:37.683 [2024-12-07 09:44:06.270834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1110299 ] 00:09:37.683 [2024-12-07 09:44:06.281647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.683 [2024-12-07 09:44:06.281658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.683 [2024-12-07 09:44:06.293677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.683 [2024-12-07 09:44:06.293687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.683 [2024-12-07 09:44:06.305713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.683 [2024-12-07 09:44:06.305722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.683 [2024-12-07 09:44:06.317742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.683 [2024-12-07 09:44:06.317752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.683 [2024-12-07 09:44:06.324829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.683 [2024-12-07 09:44:06.329774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.683 [2024-12-07 09:44:06.329788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.683 [2024-12-07 09:44:06.341807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.683 [2024-12-07 09:44:06.341819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.683 [2024-12-07 09:44:06.353837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:37.683 [2024-12-07 09:44:06.353859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.683 [2024-12-07 09:44:06.365870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.683 [2024-12-07 09:44:06.365882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.683 [2024-12-07 09:44:06.366025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.683 [2024-12-07 09:44:06.377916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.683 [2024-12-07 09:44:06.377936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.683 [2024-12-07 09:44:06.389935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.683 [2024-12-07 09:44:06.389956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.683 [2024-12-07 09:44:06.401970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.683 [2024-12-07 09:44:06.401984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.413999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.414010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.426034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.426045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.438062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.438072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.450115] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.450136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.462136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.462150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.474165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.474179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.486194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.486204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.498228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.498237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.510261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.510270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.522301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.522315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.534330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.534344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.546363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.546374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.558395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.558404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.570432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.570445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.582461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.582470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.594488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.594498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.606524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.606534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.618560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.618573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.630591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.630602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.642621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 
[2024-12-07 09:44:06.642630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.654655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.654665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.943 [2024-12-07 09:44:06.666689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.943 [2024-12-07 09:44:06.666699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.202 [2024-12-07 09:44:06.678728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.202 [2024-12-07 09:44:06.678746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.202 Running I/O for 5 seconds... 00:09:38.202 [2024-12-07 09:44:06.695191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.202 [2024-12-07 09:44:06.695210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.202 [2024-12-07 09:44:06.709315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.202 [2024-12-07 09:44:06.709335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.202 [2024-12-07 09:44:06.723418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.202 [2024-12-07 09:44:06.723437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.202 [2024-12-07 09:44:06.734382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.202 [2024-12-07 09:44:06.734401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.202 [2024-12-07 09:44:06.743500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.202 [2024-12-07 
09:44:06.743520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.202 [2024-12-07 09:44:06.758192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.202 [2024-12-07 09:44:06.758212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.202 [2024-12-07 09:44:06.772380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.202 [2024-12-07 09:44:06.772400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.202 [2024-12-07 09:44:06.786485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.202 [2024-12-07 09:44:06.786508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.202 [2024-12-07 09:44:06.800540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.202 [2024-12-07 09:44:06.800559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.203 [2024-12-07 09:44:06.814502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.203 [2024-12-07 09:44:06.814521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.203 [2024-12-07 09:44:06.828987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.203 [2024-12-07 09:44:06.829005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.203 [2024-12-07 09:44:06.844282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.203 [2024-12-07 09:44:06.844301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.203 [2024-12-07 09:44:06.858643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.203 [2024-12-07 09:44:06.858662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:09:38.203 [2024-12-07 09:44:06.873041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.203 [2024-12-07 09:44:06.873060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.203 [2024-12-07 09:44:06.887478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.203 [2024-12-07 09:44:06.887497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.203 [2024-12-07 09:44:06.897978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.203 [2024-12-07 09:44:06.897997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.203 [2024-12-07 09:44:06.912586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.203 [2024-12-07 09:44:06.912605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.203 [2024-12-07 09:44:06.926981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.203 [2024-12-07 09:44:06.927000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.462 [2024-12-07 09:44:06.941500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.462 [2024-12-07 09:44:06.941519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.462 [2024-12-07 09:44:06.954536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.462 [2024-12-07 09:44:06.954555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.462 [2024-12-07 09:44:06.969225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.462 [2024-12-07 09:44:06.969244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 
[2024-12-07 09:44:06.980050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-12-07 09:44:06.980069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-12-07 09:44:06.994687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-12-07 09:44:06.994708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-12-07 09:44:07.008805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-12-07 09:44:07.008825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-12-07 09:44:07.019711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-12-07 09:44:07.019732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-12-07 09:44:07.034592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-12-07 09:44:07.034613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-12-07 09:44:07.045646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-12-07 09:44:07.045674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-12-07 09:44:07.060482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-12-07 09:44:07.060502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-12-07 09:44:07.075727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-12-07 09:44:07.075746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-12-07 09:44:07.089924] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-12-07 09:44:07.089944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-12-07 09:44:07.104132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-12-07 09:44:07.104151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-12-07 09:44:07.115104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-12-07 09:44:07.115123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-12-07 09:44:07.129432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-12-07 09:44:07.129452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-12-07 09:44:07.143417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-12-07 09:44:07.143437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-12-07 09:44:07.157718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-12-07 09:44:07.157738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-12-07 09:44:07.171815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-12-07 09:44:07.171834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.463 [2024-12-07 09:44:07.185913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.463 [2024-12-07 09:44:07.185933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-12-07 09:44:07.200226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-12-07 09:44:07.200245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-12-07 09:44:07.211036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-12-07 09:44:07.211057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-12-07 09:44:07.225812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-12-07 09:44:07.225832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-12-07 09:44:07.239767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-12-07 09:44:07.239787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-12-07 09:44:07.253829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-12-07 09:44:07.253851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-12-07 09:44:07.265773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-12-07 09:44:07.265792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-12-07 09:44:07.280225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-12-07 09:44:07.280244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-12-07 09:44:07.294273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-12-07 09:44:07.294292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-12-07 09:44:07.308574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 
[2024-12-07 09:44:07.308598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-12-07 09:44:07.323133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-12-07 09:44:07.323152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-12-07 09:44:07.338496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-12-07 09:44:07.338516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-12-07 09:44:07.352759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-12-07 09:44:07.352779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-12-07 09:44:07.367088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-12-07 09:44:07.367107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-12-07 09:44:07.377850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-12-07 09:44:07.377869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-12-07 09:44:07.392462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-12-07 09:44:07.392481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-12-07 09:44:07.406435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-12-07 09:44:07.406454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-12-07 09:44:07.420505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-12-07 09:44:07.420523] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.723 [2024-12-07 09:44:07.434741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.723 [2024-12-07 09:44:07.434760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.983 [2024-12-07 09:44:07.449398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.983 [2024-12-07 09:44:07.449418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.983 [2024-12-07 09:44:07.465177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.983 [2024-12-07 09:44:07.465197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.983 [2024-12-07 09:44:07.479661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.983 [2024-12-07 09:44:07.479681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.983 [2024-12-07 09:44:07.493828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.983 [2024-12-07 09:44:07.493847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.983 [2024-12-07 09:44:07.505218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.983 [2024-12-07 09:44:07.505236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.983 [2024-12-07 09:44:07.519618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.983 [2024-12-07 09:44:07.519638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.983 [2024-12-07 09:44:07.533969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.983 [2024-12-07 09:44:07.534004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:38.983 [2024-12-07 09:44:07.548881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.983 [2024-12-07 09:44:07.548901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.983 [2024-12-07 09:44:07.563767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.983 [2024-12-07 09:44:07.563788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.983 [2024-12-07 09:44:07.574543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.983 [2024-12-07 09:44:07.574567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.983 [2024-12-07 09:44:07.589289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.983 [2024-12-07 09:44:07.589309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.983 [2024-12-07 09:44:07.600153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.983 [2024-12-07 09:44:07.600182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.983 [2024-12-07 09:44:07.614639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.983 [2024-12-07 09:44:07.614657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.983 [2024-12-07 09:44:07.628415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.984 [2024-12-07 09:44:07.628434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.984 [2024-12-07 09:44:07.642352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.984 [2024-12-07 09:44:07.642371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.984 [2024-12-07 09:44:07.656498] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.984 [2024-12-07 09:44:07.656517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.984 [2024-12-07 09:44:07.670163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.984 [2024-12-07 09:44:07.670182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.984 [2024-12-07 09:44:07.684415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.984 [2024-12-07 09:44:07.684435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.984 16265.00 IOPS, 127.07 MiB/s [2024-12-07T08:44:07.710Z] [2024-12-07 09:44:07.693987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.984 [2024-12-07 09:44:07.694006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.243 [2024-12-07 09:44:07.708566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.243 [2024-12-07 09:44:07.708585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.243 [2024-12-07 09:44:07.722915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.243 [2024-12-07 09:44:07.722934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.243 [2024-12-07 09:44:07.733464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.243 [2024-12-07 09:44:07.733484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.243 [2024-12-07 09:44:07.748349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.243 [2024-12-07 09:44:07.748368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.243 [2024-12-07 09:44:07.759276] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.243 [2024-12-07 09:44:07.759295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.243 [2024-12-07 09:44:07.768921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.243 [2024-12-07 09:44:07.768939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.243 [2024-12-07 09:44:07.783723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.243 [2024-12-07 09:44:07.783742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.243 [2024-12-07 09:44:07.797401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.243 [2024-12-07 09:44:07.797420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.243 [2024-12-07 09:44:07.811536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.243 [2024-12-07 09:44:07.811555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.243 [2024-12-07 09:44:07.825540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.243 [2024-12-07 09:44:07.825559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.243 [2024-12-07 09:44:07.839530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.243 [2024-12-07 09:44:07.839552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.243 [2024-12-07 09:44:07.854393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.243 [2024-12-07 09:44:07.854412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.243 [2024-12-07 09:44:07.869085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:39.243 [2024-12-07 09:44:07.869104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.243 [2024-12-07 09:44:07.883097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.243 [2024-12-07 09:44:07.883116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.243 [2024-12-07 09:44:07.897246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.243 [2024-12-07 09:44:07.897264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.243 [2024-12-07 09:44:07.911378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.243 [2024-12-07 09:44:07.911396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.243 [2024-12-07 09:44:07.925633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.244 [2024-12-07 09:44:07.925652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.244 [2024-12-07 09:44:07.939376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.244 [2024-12-07 09:44:07.939395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.244 [2024-12-07 09:44:07.953988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.244 [2024-12-07 09:44:07.954007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.244 [2024-12-07 09:44:07.964973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.244 [2024-12-07 09:44:07.964993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.503 [2024-12-07 09:44:07.980443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.503 
[2024-12-07 09:44:07.980463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.503 [2024-12-07 09:44:07.995795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.503 [2024-12-07 09:44:07.995814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.503 [2024-12-07 09:44:08.010004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.503 [2024-12-07 09:44:08.010023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.503 [2024-12-07 09:44:08.024490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.503 [2024-12-07 09:44:08.024508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.503 [2024-12-07 09:44:08.040334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.503 [2024-12-07 09:44:08.040353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.503 [2024-12-07 09:44:08.054370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.503 [2024-12-07 09:44:08.054388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.503 [2024-12-07 09:44:08.068535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.503 [2024-12-07 09:44:08.068553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.503 [2024-12-07 09:44:08.082564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.503 [2024-12-07 09:44:08.082583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.503 [2024-12-07 09:44:08.097115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.503 [2024-12-07 09:44:08.097135] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.504 [2024-12-07 09:44:08.113213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.504 [2024-12-07 09:44:08.113234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.504 [2024-12-07 09:44:08.127444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.504 [2024-12-07 09:44:08.127463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.504 [2024-12-07 09:44:08.141829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.504 [2024-12-07 09:44:08.141847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.504 [2024-12-07 09:44:08.152846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.504 [2024-12-07 09:44:08.152864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.504 [2024-12-07 09:44:08.167168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.504 [2024-12-07 09:44:08.167187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.504 [2024-12-07 09:44:08.181051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.504 [2024-12-07 09:44:08.181070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.504 [2024-12-07 09:44:08.195368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.504 [2024-12-07 09:44:08.195388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.504 [2024-12-07 09:44:08.209542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.504 [2024-12-07 09:44:08.209561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:39.504 [2024-12-07 09:44:08.220982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.504 [2024-12-07 09:44:08.221001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.235369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.235388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.249665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.249685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.260205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.260225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.274591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.274611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.289138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.289158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.300441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.300460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.314761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.314780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.328865] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.328884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.342841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.342864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.357210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.357229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.370804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.370823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.384824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.384845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.399076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.399096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.412991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.413011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.427331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.427351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.438700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.438720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.453671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.453691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.464640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.464659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.763 [2024-12-07 09:44:08.479795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.763 [2024-12-07 09:44:08.479815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 [2024-12-07 09:44:08.495198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 [2024-12-07 09:44:08.495218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 [2024-12-07 09:44:08.509758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 [2024-12-07 09:44:08.509777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 [2024-12-07 09:44:08.523529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 [2024-12-07 09:44:08.523549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 [2024-12-07 09:44:08.538144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 [2024-12-07 09:44:08.538174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 [2024-12-07 09:44:08.553513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 
[2024-12-07 09:44:08.553532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 [2024-12-07 09:44:08.568191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 [2024-12-07 09:44:08.568211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 [2024-12-07 09:44:08.579023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 [2024-12-07 09:44:08.579042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 [2024-12-07 09:44:08.594048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 [2024-12-07 09:44:08.594068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 [2024-12-07 09:44:08.610030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 [2024-12-07 09:44:08.610055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 [2024-12-07 09:44:08.624452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 [2024-12-07 09:44:08.624471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 [2024-12-07 09:44:08.639334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 [2024-12-07 09:44:08.639354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 [2024-12-07 09:44:08.654857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 [2024-12-07 09:44:08.654877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 [2024-12-07 09:44:08.668934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 [2024-12-07 09:44:08.668960] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 [2024-12-07 09:44:08.683353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 [2024-12-07 09:44:08.683372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 16332.00 IOPS, 127.59 MiB/s [2024-12-07T08:44:08.749Z] [2024-12-07 09:44:08.694767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 [2024-12-07 09:44:08.694785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 [2024-12-07 09:44:08.709604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 [2024-12-07 09:44:08.709624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 [2024-12-07 09:44:08.720538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 [2024-12-07 09:44:08.720557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 [2024-12-07 09:44:08.729980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 [2024-12-07 09:44:08.729999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.023 [2024-12-07 09:44:08.744704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.023 [2024-12-07 09:44:08.744724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.283 [2024-12-07 09:44:08.755741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.283 [2024-12-07 09:44:08.755761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.283 [2024-12-07 09:44:08.770222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.283 [2024-12-07 09:44:08.770242] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[log trimmed: the following two error lines repeat back-to-back for every add attempt from 2024-12-07 09:44:08.784158 through 09:44:10.707045, at roughly 8-16 ms intervals]
00:09:40.283 [2024-12-07 09:44:08.784158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:40.283 [2024-12-07 09:44:08.784177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[performance samples interleaved with the repeated errors:]
00:09:41.063 16366.33 IOPS, 127.86 MiB/s [2024-12-07T08:44:09.789Z]
00:09:42.105 16361.50 IOPS, 127.82 MiB/s [2024-12-07T08:44:10.831Z]
00:09:42.105 [2024-12-07 09:44:10.707045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:09:42.105 [2024-12-07 09:44:10.707064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.105 [2024-12-07 09:44:10.716103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.105 [2024-12-07 09:44:10.716121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.105 [2024-12-07 09:44:10.725001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.105 [2024-12-07 09:44:10.725018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.105 [2024-12-07 09:44:10.733816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.105 [2024-12-07 09:44:10.733834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.105 [2024-12-07 09:44:10.742843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.105 [2024-12-07 09:44:10.742862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.105 [2024-12-07 09:44:10.752397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.105 [2024-12-07 09:44:10.752416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.105 [2024-12-07 09:44:10.761885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.105 [2024-12-07 09:44:10.761904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.105 [2024-12-07 09:44:10.770561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.105 [2024-12-07 09:44:10.770579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.105 [2024-12-07 09:44:10.779938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.105 
[2024-12-07 09:44:10.779965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.105 [2024-12-07 09:44:10.788861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.105 [2024-12-07 09:44:10.788881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.105 [2024-12-07 09:44:10.798309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.105 [2024-12-07 09:44:10.798328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.105 [2024-12-07 09:44:10.805289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.105 [2024-12-07 09:44:10.805306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.105 [2024-12-07 09:44:10.816575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.105 [2024-12-07 09:44:10.816594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.105 [2024-12-07 09:44:10.825457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.105 [2024-12-07 09:44:10.825475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.364 [2024-12-07 09:44:10.834290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.364 [2024-12-07 09:44:10.834309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.364 [2024-12-07 09:44:10.843514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.364 [2024-12-07 09:44:10.843533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.364 [2024-12-07 09:44:10.852278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.364 [2024-12-07 09:44:10.852296] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.364 [2024-12-07 09:44:10.861575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.364 [2024-12-07 09:44:10.861594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.364 [2024-12-07 09:44:10.868600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.364 [2024-12-07 09:44:10.868618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.364 [2024-12-07 09:44:10.880171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.364 [2024-12-07 09:44:10.880190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.364 [2024-12-07 09:44:10.889068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.364 [2024-12-07 09:44:10.889086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.364 [2024-12-07 09:44:10.897755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.364 [2024-12-07 09:44:10.897774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:10.907758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:10.907776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:10.916685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:10.916703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:10.923722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:10.923740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:42.365 [2024-12-07 09:44:10.933718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:10.933737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:10.942513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:10.942531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:10.952077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:10.952095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:10.961587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:10.961605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:10.970339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:10.970357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:10.979223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:10.979241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:10.988048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:10.988071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:10.996805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:10.996823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:11.005568] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:11.005586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:11.014936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:11.014960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:11.023571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:11.023590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:11.032892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:11.032911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:11.042128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:11.042147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:11.050833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:11.050851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:11.060186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:11.060204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:11.069622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:11.069640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:11.079080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:11.079098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.365 [2024-12-07 09:44:11.087812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.365 [2024-12-07 09:44:11.087830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.624 [2024-12-07 09:44:11.097409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.624 [2024-12-07 09:44:11.097428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.624 [2024-12-07 09:44:11.106351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.624 [2024-12-07 09:44:11.106369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.624 [2024-12-07 09:44:11.115786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.624 [2024-12-07 09:44:11.115805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.624 [2024-12-07 09:44:11.122799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.624 [2024-12-07 09:44:11.122816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.624 [2024-12-07 09:44:11.134036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.624 [2024-12-07 09:44:11.134054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.624 [2024-12-07 09:44:11.142989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.624 [2024-12-07 09:44:11.143006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.624 [2024-12-07 09:44:11.151797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.624 
[2024-12-07 09:44:11.151815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.624 [2024-12-07 09:44:11.161295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.624 [2024-12-07 09:44:11.161318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.624 [2024-12-07 09:44:11.170908] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.170927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.625 [2024-12-07 09:44:11.179669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.179688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.625 [2024-12-07 09:44:11.189131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.189149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.625 [2024-12-07 09:44:11.198000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.198020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.625 [2024-12-07 09:44:11.207525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.207544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.625 [2024-12-07 09:44:11.217010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.217030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.625 [2024-12-07 09:44:11.223980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.223999] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.625 [2024-12-07 09:44:11.235325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.235345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.625 [2024-12-07 09:44:11.244268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.244287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.625 [2024-12-07 09:44:11.253213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.253231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.625 [2024-12-07 09:44:11.262781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.262800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.625 [2024-12-07 09:44:11.271486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.271506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.625 [2024-12-07 09:44:11.280239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.280258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.625 [2024-12-07 09:44:11.290396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.290415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.625 [2024-12-07 09:44:11.299349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.299368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:42.625 [2024-12-07 09:44:11.308717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.308735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.625 [2024-12-07 09:44:11.318086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.318104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.625 [2024-12-07 09:44:11.327353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.327372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.625 [2024-12-07 09:44:11.336976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.337000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.625 [2024-12-07 09:44:11.346638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.625 [2024-12-07 09:44:11.346657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-12-07 09:44:11.356098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.884 [2024-12-07 09:44:11.356118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-12-07 09:44:11.364852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.884 [2024-12-07 09:44:11.364871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-12-07 09:44:11.373717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.884 [2024-12-07 09:44:11.373736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-12-07 09:44:11.383300] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.884 [2024-12-07 09:44:11.383319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-12-07 09:44:11.392575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.884 [2024-12-07 09:44:11.392594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-12-07 09:44:11.401182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.884 [2024-12-07 09:44:11.401201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-12-07 09:44:11.410466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.884 [2024-12-07 09:44:11.410486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-12-07 09:44:11.419114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.884 [2024-12-07 09:44:11.419132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.428446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.428465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.437845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.437864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.446801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.446820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.456299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.456318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.465581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.465601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.474333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.474352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.483555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.483574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.492872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.492891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.502232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.502251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.510794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.510818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.520336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.520357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.530208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 
[2024-12-07 09:44:11.530227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.539165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.539184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.548566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.548585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.558044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.558063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.567780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.567799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.577244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.577263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.586207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.586226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.594872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.594890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-12-07 09:44:11.603752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-07 09:44:11.603770] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.145 [2024-12-07 09:44:11.613329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.145 [2024-12-07 09:44:11.613348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.145 [2024-12-07 09:44:11.622101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.145 [2024-12-07 09:44:11.622120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.145 [2024-12-07 09:44:11.631413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.145 [2024-12-07 09:44:11.631431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.145 [2024-12-07 09:44:11.640874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.145 [2024-12-07 09:44:11.640892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.145 [2024-12-07 09:44:11.650418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.145 [2024-12-07 09:44:11.650437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.145 [2024-12-07 09:44:11.659756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.145 [2024-12-07 09:44:11.659775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.145 [2024-12-07 09:44:11.668499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.145 [2024-12-07 09:44:11.668519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.145 [2024-12-07 09:44:11.677300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.145 [2024-12-07 09:44:11.677320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:43.145 [2024-12-07 09:44:11.684244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:43.145 [2024-12-07 09:44:11.684266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:43.145 [2024-12-07 09:44:11.695655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:43.145 [2024-12-07 09:44:11.695674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:43.145 16351.40 IOPS, 127.75 MiB/s [2024-12-07T08:44:11.871Z]
[2024-12-07 09:44:11.702136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:43.145 [2024-12-07 09:44:11.702153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:43.145
00:09:43.145 Latency(us)
00:09:43.145 [2024-12-07T08:44:11.872Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:09:43.146 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:43.146 	 Nvme1n1             :       5.01   16354.67     127.77       0.00     0.00    7819.56    3248.31   16412.49
00:09:43.146 [2024-12-07T08:44:11.872Z] ===================================================================================================================
00:09:43.146 [2024-12-07T08:44:11.872Z] Total               :   16354.67     127.77       0.00     0.00    7819.56    3248.31   16412.49
00:09:43.146 [2024-12-07 09:44:11.709938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:43.146 [2024-12-07 09:44:11.709958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:43.146 [2024-12-07 09:44:11.717959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:43.146 [2024-12-07 09:44:11.717972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:43.146 [2024-12-07 09:44:11.725984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:09:43.146 [2024-12-07 09:44:11.725997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-07 09:44:11.734015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-07 09:44:11.734031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-07 09:44:11.742032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-07 09:44:11.742044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-07 09:44:11.750051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-07 09:44:11.750062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-07 09:44:11.758074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-07 09:44:11.758084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-07 09:44:11.766092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-07 09:44:11.766102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-07 09:44:11.774113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-07 09:44:11.774123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-07 09:44:11.782134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-07 09:44:11.782145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-07 09:44:11.790157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-07 09:44:11.790167] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-07 09:44:11.798177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-07 09:44:11.798188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-07 09:44:11.806198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-07 09:44:11.806209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-07 09:44:11.814229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-07 09:44:11.814238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-07 09:44:11.822272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-07 09:44:11.822284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-07 09:44:11.830271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-07 09:44:11.830281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-07 09:44:11.838290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-07 09:44:11.838300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-07 09:44:11.846311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-07 09:44:11.846320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-07 09:44:11.854335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-07 09:44:11.854345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:43.146 [2024-12-07 09:44:11.862354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-07 09:44:11.862364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.406 [2024-12-07 09:44:11.870378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.406 [2024-12-07 09:44:11.870387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.406 [2024-12-07 09:44:11.878399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.406 [2024-12-07 09:44:11.878408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1110299) - No such process 00:09:43.406 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1110299 00:09:43.406 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.406 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.406 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.406 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.406 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:43.406 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.406 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.406 delay0 00:09:43.406 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.406 09:44:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:43.406 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.406 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.406 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.406 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:43.406 [2024-12-07 09:44:12.052093] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:49.973 Initializing NVMe Controllers 00:09:49.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:49.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:49.973 Initialization complete. Launching workers. 
00:09:49.973 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 196 00:09:49.973 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 483, failed to submit 33 00:09:49.973 success 311, unsuccessful 172, failed 0 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:49.973 rmmod nvme_tcp 00:09:49.973 rmmod nvme_fabrics 00:09:49.973 rmmod nvme_keyring 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 1108447 ']' 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 1108447 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1108447 ']' 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1108447 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1108447 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1108447' 00:09:49.973 killing process with pid 1108447 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1108447 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1108447 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.973 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:52.078 00:09:52.078 real 0m31.046s 00:09:52.078 user 0m42.016s 00:09:52.078 sys 0m10.832s 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.078 ************************************ 00:09:52.078 END TEST nvmf_zcopy 00:09:52.078 ************************************ 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:52.078 ************************************ 00:09:52.078 START TEST nvmf_nmic 00:09:52.078 ************************************ 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:52.078 * Looking for test storage... 
00:09:52.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.078 09:44:20 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:52.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.078 --rc genhtml_branch_coverage=1 00:09:52.078 --rc genhtml_function_coverage=1 00:09:52.078 --rc genhtml_legend=1 00:09:52.078 --rc geninfo_all_blocks=1 00:09:52.078 --rc geninfo_unexecuted_blocks=1 
00:09:52.078 00:09:52.078 ' 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:52.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.078 --rc genhtml_branch_coverage=1 00:09:52.078 --rc genhtml_function_coverage=1 00:09:52.078 --rc genhtml_legend=1 00:09:52.078 --rc geninfo_all_blocks=1 00:09:52.078 --rc geninfo_unexecuted_blocks=1 00:09:52.078 00:09:52.078 ' 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:52.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.078 --rc genhtml_branch_coverage=1 00:09:52.078 --rc genhtml_function_coverage=1 00:09:52.078 --rc genhtml_legend=1 00:09:52.078 --rc geninfo_all_blocks=1 00:09:52.078 --rc geninfo_unexecuted_blocks=1 00:09:52.078 00:09:52.078 ' 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:52.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.078 --rc genhtml_branch_coverage=1 00:09:52.078 --rc genhtml_function_coverage=1 00:09:52.078 --rc genhtml_legend=1 00:09:52.078 --rc geninfo_all_blocks=1 00:09:52.078 --rc geninfo_unexecuted_blocks=1 00:09:52.078 00:09:52.078 ' 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.078 09:44:20 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.078 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:52.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:52.380 
09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:52.380 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.655 09:44:26 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.655 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:09:57.656 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:57.656 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:57.656 Found net devices under 0000:86:00.0: cvl_0_0 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:57.656 Found net devices under 0000:86:00.1: cvl_0_1 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.656 09:44:26 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:57.656 
09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:57.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:09:57.656 00:09:57.656 --- 10.0.0.2 ping statistics --- 00:09:57.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.656 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:57.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:57.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:09:57.656 00:09:57.656 --- 10.0.0.1 ping statistics --- 00:09:57.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.656 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=1115691 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 1115691 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1115691 ']' 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:57.656 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.916 [2024-12-07 09:44:26.404670] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:57.916 [2024-12-07 09:44:26.404721] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.916 [2024-12-07 09:44:26.462202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.916 [2024-12-07 09:44:26.506381] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.916 [2024-12-07 09:44:26.506421] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:57.916 [2024-12-07 09:44:26.506428] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.916 [2024-12-07 09:44:26.506434] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.916 [2024-12-07 09:44:26.506439] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.916 [2024-12-07 09:44:26.509966] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.916 [2024-12-07 09:44:26.509986] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.916 [2024-12-07 09:44:26.513979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.916 [2024-12-07 09:44:26.513981] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.916 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.916 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:57.916 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:57.916 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:57.916 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.175 [2024-12-07 09:44:26.667514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.175 
09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.175 Malloc0 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.175 [2024-12-07 09:44:26.721964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:58.175 test case1: single bdev can't be used in multiple subsystems 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.175 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.175 [2024-12-07 09:44:26.749871] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:58.175 [2024-12-07 
09:44:26.749892] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:58.175 [2024-12-07 09:44:26.749899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.175 request: 00:09:58.175 { 00:09:58.175 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:58.175 "namespace": { 00:09:58.175 "bdev_name": "Malloc0", 00:09:58.175 "no_auto_visible": false 00:09:58.175 }, 00:09:58.176 "method": "nvmf_subsystem_add_ns", 00:09:58.176 "req_id": 1 00:09:58.176 } 00:09:58.176 Got JSON-RPC error response 00:09:58.176 response: 00:09:58.176 { 00:09:58.176 "code": -32602, 00:09:58.176 "message": "Invalid parameters" 00:09:58.176 } 00:09:58.176 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:58.176 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:58.176 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:58.176 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:58.176 Adding namespace failed - expected result. 
00:09:58.176 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:58.176 test case2: host connect to nvmf target in multiple paths 00:09:58.176 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:58.176 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.176 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.176 [2024-12-07 09:44:26.762015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:58.176 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.176 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:59.556 09:44:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:00.495 09:44:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:00.495 09:44:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:00.495 09:44:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:00.495 09:44:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:00.495 09:44:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 
00:10:03.032 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:03.032 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:03.032 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:03.032 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:03.032 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:03.032 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:03.032 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:03.032 [global] 00:10:03.032 thread=1 00:10:03.032 invalidate=1 00:10:03.032 rw=write 00:10:03.032 time_based=1 00:10:03.032 runtime=1 00:10:03.032 ioengine=libaio 00:10:03.032 direct=1 00:10:03.032 bs=4096 00:10:03.032 iodepth=1 00:10:03.032 norandommap=0 00:10:03.032 numjobs=1 00:10:03.032 00:10:03.032 verify_dump=1 00:10:03.032 verify_backlog=512 00:10:03.032 verify_state_save=0 00:10:03.032 do_verify=1 00:10:03.032 verify=crc32c-intel 00:10:03.032 [job0] 00:10:03.032 filename=/dev/nvme0n1 00:10:03.032 Could not set queue depth (nvme0n1) 00:10:03.032 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.032 fio-3.35 00:10:03.032 Starting 1 thread 00:10:03.966 00:10:03.966 job0: (groupid=0, jobs=1): err= 0: pid=1116764: Sat Dec 7 09:44:32 2024 00:10:03.966 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:03.966 slat (nsec): min=6995, max=41834, avg=8053.20, stdev=1551.01 00:10:03.966 clat (usec): min=195, max=522, avg=274.28, stdev=21.89 00:10:03.966 lat (usec): min=202, max=532, avg=282.33, 
stdev=21.93 00:10:03.966 clat percentiles (usec): 00:10:03.966 | 1.00th=[ 219], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 255], 00:10:03.966 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 285], 00:10:03.966 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 302], 95.00th=[ 306], 00:10:03.966 | 99.00th=[ 314], 99.50th=[ 318], 99.90th=[ 326], 99.95th=[ 343], 00:10:03.966 | 99.99th=[ 523] 00:10:03.966 write: IOPS=2286, BW=9147KiB/s (9366kB/s)(9156KiB/1001msec); 0 zone resets 00:10:03.966 slat (nsec): min=10079, max=45541, avg=11279.34, stdev=1677.65 00:10:03.966 clat (usec): min=117, max=377, avg=167.30, stdev=20.78 00:10:03.966 lat (usec): min=128, max=422, avg=178.58, stdev=20.94 00:10:03.966 clat percentiles (usec): 00:10:03.966 | 1.00th=[ 129], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 157], 00:10:03.966 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165], 00:10:03.966 | 70.00th=[ 167], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 237], 00:10:03.966 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 265], 99.95th=[ 269], 00:10:03.966 | 99.99th=[ 379] 00:10:03.966 bw ( KiB/s): min= 8968, max= 8968, per=98.04%, avg=8968.00, stdev= 0.00, samples=1 00:10:03.966 iops : min= 2242, max= 2242, avg=2242.00, stdev= 0.00, samples=1 00:10:03.966 lat (usec) : 250=56.51%, 500=43.46%, 750=0.02% 00:10:03.966 cpu : usr=3.80%, sys=6.60%, ctx=4337, majf=0, minf=1 00:10:03.966 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.966 issued rwts: total=2048,2289,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.966 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.966 00:10:03.966 Run status group 0 (all jobs): 00:10:03.966 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:10:03.966 WRITE: bw=9147KiB/s 
(9366kB/s), 9147KiB/s-9147KiB/s (9366kB/s-9366kB/s), io=9156KiB (9376kB), run=1001-1001msec 00:10:03.966 00:10:03.966 Disk stats (read/write): 00:10:03.966 nvme0n1: ios=1892/2048, merge=0/0, ticks=642/322, in_queue=964, util=95.69% 00:10:03.966 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:04.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:04.225 09:44:32 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:04.225 rmmod nvme_tcp 00:10:04.225 rmmod nvme_fabrics 00:10:04.225 rmmod nvme_keyring 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 1115691 ']' 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 1115691 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1115691 ']' 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1115691 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1115691 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1115691' 00:10:04.225 killing process with pid 1115691 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1115691 00:10:04.225 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1115691 00:10:04.484 09:44:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == 
iso ']' 00:10:04.484 09:44:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:04.484 09:44:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:04.484 09:44:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:04.484 09:44:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:10:04.484 09:44:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:04.484 09:44:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:10:04.485 09:44:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:04.485 09:44:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:04.485 09:44:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.485 09:44:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.485 09:44:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.021 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:07.021 00:10:07.021 real 0m14.547s 00:10:07.021 user 0m32.987s 00:10:07.021 sys 0m4.981s 00:10:07.021 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.021 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.021 ************************************ 00:10:07.021 END TEST nvmf_nmic 00:10:07.021 ************************************ 00:10:07.021 09:44:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:07.021 09:44:35 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:07.021 09:44:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.021 09:44:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.021 ************************************ 00:10:07.021 START TEST nvmf_fio_target 00:10:07.021 ************************************ 00:10:07.021 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:07.021 * Looking for test storage... 00:10:07.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.021 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:07.021 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:07.021 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.022 
09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:07.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.022 --rc genhtml_branch_coverage=1 00:10:07.022 --rc genhtml_function_coverage=1 00:10:07.022 --rc genhtml_legend=1 00:10:07.022 --rc geninfo_all_blocks=1 00:10:07.022 --rc geninfo_unexecuted_blocks=1 00:10:07.022 00:10:07.022 ' 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:07.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.022 --rc genhtml_branch_coverage=1 00:10:07.022 --rc genhtml_function_coverage=1 00:10:07.022 --rc genhtml_legend=1 00:10:07.022 --rc geninfo_all_blocks=1 00:10:07.022 --rc geninfo_unexecuted_blocks=1 00:10:07.022 00:10:07.022 ' 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:07.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.022 --rc genhtml_branch_coverage=1 00:10:07.022 --rc genhtml_function_coverage=1 00:10:07.022 --rc genhtml_legend=1 00:10:07.022 --rc geninfo_all_blocks=1 00:10:07.022 --rc geninfo_unexecuted_blocks=1 00:10:07.022 00:10:07.022 ' 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:07.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.022 --rc genhtml_branch_coverage=1 00:10:07.022 --rc 
genhtml_function_coverage=1 00:10:07.022 --rc genhtml_legend=1 00:10:07.022 --rc geninfo_all_blocks=1 00:10:07.022 --rc geninfo_unexecuted_blocks=1 00:10:07.022 00:10:07.022 ' 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:07.022 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:12.333 09:44:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:12.333 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:12.333 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:12.333 09:44:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.333 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:12.333 Found net devices under 0000:86:00.0: cvl_0_0 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:12.334 Found net devices under 0000:86:00.1: cvl_0_1 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.334 09:44:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:12.334 09:44:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:12.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:12.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:10:12.334 00:10:12.334 --- 10.0.0.2 ping statistics --- 00:10:12.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.334 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:10:12.334 00:10:12.334 --- 10.0.0.1 ping statistics --- 00:10:12.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.334 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 
00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=1120503 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 1120503 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1120503 ']' 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.334 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.334 [2024-12-07 09:44:40.846126] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:12.334 [2024-12-07 09:44:40.846172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.334 [2024-12-07 09:44:40.903954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.334 [2024-12-07 09:44:40.945710] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.334 [2024-12-07 09:44:40.945751] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.334 [2024-12-07 09:44:40.945758] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.334 [2024-12-07 09:44:40.945765] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.334 [2024-12-07 09:44:40.945770] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:12.334 [2024-12-07 09:44:40.945806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.334 [2024-12-07 09:44:40.945905] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.334 [2024-12-07 09:44:40.945999] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.334 [2024-12-07 09:44:40.946001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.334 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:12.334 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:12.334 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:12.334 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:12.334 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.593 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.593 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:12.593 [2024-12-07 09:44:41.257567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.593 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:12.852 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:12.852 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.111 09:44:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:13.111 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.370 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:13.370 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.629 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:13.629 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:13.888 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.888 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:13.888 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.147 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:14.147 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.406 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:14.406 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:14.665 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:14.924 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:14.924 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:14.924 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:14.924 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:15.185 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.443 [2024-12-07 09:44:43.984994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.443 09:44:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:15.702 09:44:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:15.702 09:44:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:17.083 09:44:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:17.083 09:44:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:17.083 09:44:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:17.083 09:44:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:17.083 09:44:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:17.083 09:44:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:18.988 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:18.988 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:18.988 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:18.988 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:18.988 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:18.988 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:18.988 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:18.988 [global] 00:10:18.988 thread=1 00:10:18.988 invalidate=1 00:10:18.988 rw=write 00:10:18.988 time_based=1 00:10:18.988 runtime=1 00:10:18.988 ioengine=libaio 00:10:18.988 direct=1 00:10:18.988 bs=4096 00:10:18.988 iodepth=1 00:10:18.988 norandommap=0 00:10:18.988 numjobs=1 00:10:18.988 00:10:18.988 
verify_dump=1 00:10:18.988 verify_backlog=512 00:10:18.988 verify_state_save=0 00:10:18.988 do_verify=1 00:10:18.988 verify=crc32c-intel 00:10:18.988 [job0] 00:10:18.988 filename=/dev/nvme0n1 00:10:18.988 [job1] 00:10:18.988 filename=/dev/nvme0n2 00:10:18.988 [job2] 00:10:18.988 filename=/dev/nvme0n3 00:10:18.988 [job3] 00:10:18.988 filename=/dev/nvme0n4 00:10:18.988 Could not set queue depth (nvme0n1) 00:10:18.988 Could not set queue depth (nvme0n2) 00:10:18.988 Could not set queue depth (nvme0n3) 00:10:18.988 Could not set queue depth (nvme0n4) 00:10:19.247 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.248 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.248 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.248 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.248 fio-3.35 00:10:19.248 Starting 4 threads 00:10:20.627 00:10:20.627 job0: (groupid=0, jobs=1): err= 0: pid=1121878: Sat Dec 7 09:44:49 2024 00:10:20.627 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:10:20.627 slat (nsec): min=10812, max=21562, avg=20709.27, stdev=2221.89 00:10:20.627 clat (usec): min=40815, max=42009, avg=41242.34, stdev=459.68 00:10:20.627 lat (usec): min=40836, max=42031, avg=41263.05, stdev=459.94 00:10:20.627 clat percentiles (usec): 00:10:20.627 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:20.627 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:20.627 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:20.627 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:20.627 | 99.99th=[42206] 00:10:20.627 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:10:20.627 slat (nsec): min=11027, 
max=64559, avg=13636.85, stdev=3815.58 00:10:20.627 clat (usec): min=140, max=345, avg=197.63, stdev=28.09 00:10:20.627 lat (usec): min=152, max=409, avg=211.26, stdev=29.05 00:10:20.627 clat percentiles (usec): 00:10:20.627 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 176], 00:10:20.627 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 200], 00:10:20.627 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 241], 00:10:20.627 | 99.00th=[ 285], 99.50th=[ 322], 99.90th=[ 347], 99.95th=[ 347], 00:10:20.627 | 99.99th=[ 347] 00:10:20.627 bw ( KiB/s): min= 4096, max= 4096, per=50.90%, avg=4096.00, stdev= 0.00, samples=1 00:10:20.627 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:20.627 lat (usec) : 250=91.95%, 500=3.93% 00:10:20.627 lat (msec) : 50=4.12% 00:10:20.627 cpu : usr=0.39%, sys=0.59%, ctx=535, majf=0, minf=1 00:10:20.627 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.627 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.627 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.627 job1: (groupid=0, jobs=1): err= 0: pid=1121879: Sat Dec 7 09:44:49 2024 00:10:20.627 read: IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 00:10:20.627 slat (nsec): min=9716, max=24441, avg=22681.10, stdev=3140.65 00:10:20.627 clat (usec): min=40833, max=41981, avg=41525.99, stdev=490.08 00:10:20.627 lat (usec): min=40857, max=42004, avg=41548.67, stdev=489.82 00:10:20.627 clat percentiles (usec): 00:10:20.627 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:20.627 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:10:20.627 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:20.627 | 99.00th=[42206], 99.50th=[42206], 
99.90th=[42206], 99.95th=[42206], 00:10:20.627 | 99.99th=[42206] 00:10:20.627 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:20.627 slat (usec): min=9, max=23774, avg=57.16, stdev=1050.21 00:10:20.627 clat (usec): min=131, max=382, avg=190.50, stdev=32.94 00:10:20.627 lat (usec): min=142, max=24157, avg=247.66, stdev=1059.21 00:10:20.627 clat percentiles (usec): 00:10:20.627 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 159], 00:10:20.627 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 188], 60.00th=[ 196], 00:10:20.627 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 239], 00:10:20.627 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 383], 99.95th=[ 383], 00:10:20.627 | 99.99th=[ 383] 00:10:20.627 bw ( KiB/s): min= 4096, max= 4096, per=50.90%, avg=4096.00, stdev= 0.00, samples=1 00:10:20.627 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:20.627 lat (usec) : 250=93.06%, 500=3.00% 00:10:20.627 lat (msec) : 50=3.94% 00:10:20.627 cpu : usr=0.40%, sys=0.40%, ctx=538, majf=0, minf=1 00:10:20.627 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.627 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.627 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.627 job2: (groupid=0, jobs=1): err= 0: pid=1121880: Sat Dec 7 09:44:49 2024 00:10:20.627 read: IOPS=21, BW=86.7KiB/s (88.8kB/s)(88.0KiB/1015msec) 00:10:20.627 slat (nsec): min=11402, max=25264, avg=22786.64, stdev=2589.26 00:10:20.627 clat (usec): min=40718, max=42110, avg=41021.32, stdev=255.91 00:10:20.627 lat (usec): min=40744, max=42134, avg=41044.10, stdev=255.56 00:10:20.627 clat percentiles (usec): 00:10:20.627 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:20.627 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:20.627 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:20.627 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:20.627 | 99.99th=[42206] 00:10:20.627 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:10:20.627 slat (nsec): min=11160, max=38588, avg=12685.51, stdev=2252.26 00:10:20.627 clat (usec): min=155, max=314, avg=201.53, stdev=26.72 00:10:20.627 lat (usec): min=167, max=349, avg=214.22, stdev=27.18 00:10:20.627 clat percentiles (usec): 00:10:20.627 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:10:20.627 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 202], 00:10:20.627 | 70.00th=[ 212], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 243], 00:10:20.627 | 99.00th=[ 306], 99.50th=[ 310], 99.90th=[ 314], 99.95th=[ 314], 00:10:20.627 | 99.99th=[ 314] 00:10:20.627 bw ( KiB/s): min= 4096, max= 4096, per=50.90%, avg=4096.00, stdev= 0.00, samples=1 00:10:20.627 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:20.627 lat (usec) : 250=92.32%, 500=3.56% 00:10:20.627 lat (msec) : 50=4.12% 00:10:20.627 cpu : usr=0.79%, sys=0.59%, ctx=535, majf=0, minf=1 00:10:20.627 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.627 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.627 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.628 job3: (groupid=0, jobs=1): err= 0: pid=1121881: Sat Dec 7 09:44:49 2024 00:10:20.628 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:10:20.628 slat (nsec): min=12699, max=29277, avg=23328.91, stdev=2734.42 00:10:20.628 clat (usec): min=40869, max=41356, avg=40980.98, stdev=100.85 00:10:20.628 lat (usec): min=40894, 
max=41369, avg=41004.31, stdev=98.71 00:10:20.628 clat percentiles (usec): 00:10:20.628 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:20.628 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:20.628 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:20.628 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:20.628 | 99.99th=[41157] 00:10:20.628 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:10:20.628 slat (nsec): min=11080, max=37334, avg=12511.38, stdev=1995.37 00:10:20.628 clat (usec): min=163, max=306, avg=191.86, stdev=14.48 00:10:20.628 lat (usec): min=175, max=332, avg=204.37, stdev=15.11 00:10:20.628 clat percentiles (usec): 00:10:20.628 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:10:20.628 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:10:20.628 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 206], 95.00th=[ 212], 00:10:20.628 | 99.00th=[ 231], 99.50th=[ 293], 99.90th=[ 306], 99.95th=[ 306], 00:10:20.628 | 99.99th=[ 306] 00:10:20.628 bw ( KiB/s): min= 4096, max= 4096, per=50.90%, avg=4096.00, stdev= 0.00, samples=1 00:10:20.628 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:20.628 lat (usec) : 250=94.94%, 500=0.94% 00:10:20.628 lat (msec) : 50=4.12% 00:10:20.628 cpu : usr=0.69%, sys=0.69%, ctx=535, majf=0, minf=1 00:10:20.628 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.628 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.628 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.628 00:10:20.628 Run status group 0 (all jobs): 00:10:20.628 READ: bw=342KiB/s (350kB/s), 83.9KiB/s-87.2KiB/s (85.9kB/s-89.3kB/s), io=348KiB (356kB), 
run=1001-1018msec 00:10:20.628 WRITE: bw=8047KiB/s (8240kB/s), 2012KiB/s-2046KiB/s (2060kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1018msec 00:10:20.628 00:10:20.628 Disk stats (read/write): 00:10:20.628 nvme0n1: ios=67/512, merge=0/0, ticks=728/100, in_queue=828, util=86.67% 00:10:20.628 nvme0n2: ios=41/512, merge=0/0, ticks=1698/97, in_queue=1795, util=98.48% 00:10:20.628 nvme0n3: ios=41/512, merge=0/0, ticks=1689/96, in_queue=1785, util=98.75% 00:10:20.628 nvme0n4: ios=42/512, merge=0/0, ticks=1727/94, in_queue=1821, util=98.74% 00:10:20.628 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:20.628 [global] 00:10:20.628 thread=1 00:10:20.628 invalidate=1 00:10:20.628 rw=randwrite 00:10:20.628 time_based=1 00:10:20.628 runtime=1 00:10:20.628 ioengine=libaio 00:10:20.628 direct=1 00:10:20.628 bs=4096 00:10:20.628 iodepth=1 00:10:20.628 norandommap=0 00:10:20.628 numjobs=1 00:10:20.628 00:10:20.628 verify_dump=1 00:10:20.628 verify_backlog=512 00:10:20.628 verify_state_save=0 00:10:20.628 do_verify=1 00:10:20.628 verify=crc32c-intel 00:10:20.628 [job0] 00:10:20.628 filename=/dev/nvme0n1 00:10:20.628 [job1] 00:10:20.628 filename=/dev/nvme0n2 00:10:20.628 [job2] 00:10:20.628 filename=/dev/nvme0n3 00:10:20.628 [job3] 00:10:20.628 filename=/dev/nvme0n4 00:10:20.628 Could not set queue depth (nvme0n1) 00:10:20.628 Could not set queue depth (nvme0n2) 00:10:20.628 Could not set queue depth (nvme0n3) 00:10:20.628 Could not set queue depth (nvme0n4) 00:10:20.887 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.887 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.887 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.887 job3: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.887 fio-3.35 00:10:20.887 Starting 4 threads 00:10:22.279 00:10:22.279 job0: (groupid=0, jobs=1): err= 0: pid=1122249: Sat Dec 7 09:44:50 2024 00:10:22.279 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:22.279 slat (nsec): min=6741, max=27246, avg=7708.43, stdev=1084.00 00:10:22.279 clat (usec): min=204, max=1056, avg=258.17, stdev=42.52 00:10:22.279 lat (usec): min=212, max=1065, avg=265.87, stdev=42.63 00:10:22.279 clat percentiles (usec): 00:10:22.279 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 235], 00:10:22.279 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:10:22.279 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 297], 00:10:22.279 | 99.00th=[ 449], 99.50th=[ 469], 99.90th=[ 906], 99.95th=[ 922], 00:10:22.279 | 99.99th=[ 1057] 00:10:22.279 write: IOPS=2437, BW=9750KiB/s (9984kB/s)(9760KiB/1001msec); 0 zone resets 00:10:22.279 slat (nsec): min=9435, max=44571, avg=10632.36, stdev=1504.57 00:10:22.279 clat (usec): min=117, max=336, avg=171.93, stdev=24.01 00:10:22.279 lat (usec): min=128, max=380, avg=182.56, stdev=24.23 00:10:22.279 clat percentiles (usec): 00:10:22.279 | 1.00th=[ 133], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:10:22.279 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 174], 00:10:22.279 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 204], 95.00th=[ 217], 00:10:22.279 | 99.00th=[ 247], 99.50th=[ 269], 99.90th=[ 293], 99.95th=[ 297], 00:10:22.279 | 99.99th=[ 338] 00:10:22.279 bw ( KiB/s): min= 9508, max= 9508, per=32.07%, avg=9508.00, stdev= 0.00, samples=1 00:10:22.279 iops : min= 2377, max= 2377, avg=2377.00, stdev= 0.00, samples=1 00:10:22.279 lat (usec) : 250=75.62%, 500=24.29%, 750=0.02%, 1000=0.04% 00:10:22.279 lat (msec) : 2=0.02% 00:10:22.279 cpu : usr=2.50%, sys=4.10%, ctx=4489, majf=0, minf=1 00:10:22.279 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.279 issued rwts: total=2048,2440,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.279 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.279 job1: (groupid=0, jobs=1): err= 0: pid=1122251: Sat Dec 7 09:44:50 2024 00:10:22.279 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:22.279 slat (nsec): min=7526, max=81384, avg=9424.47, stdev=3114.37 00:10:22.279 clat (usec): min=202, max=41297, avg=746.42, stdev=4215.38 00:10:22.279 lat (usec): min=210, max=41306, avg=755.85, stdev=4216.76 00:10:22.279 clat percentiles (usec): 00:10:22.279 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 237], 00:10:22.279 | 30.00th=[ 245], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 277], 00:10:22.279 | 70.00th=[ 293], 80.00th=[ 338], 90.00th=[ 416], 95.00th=[ 486], 00:10:22.279 | 99.00th=[34866], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:22.279 | 99.99th=[41157] 00:10:22.279 write: IOPS=1096, BW=4388KiB/s (4493kB/s)(4392KiB/1001msec); 0 zone resets 00:10:22.279 slat (nsec): min=9413, max=39998, avg=12378.70, stdev=2264.27 00:10:22.279 clat (usec): min=127, max=491, avg=187.20, stdev=31.42 00:10:22.279 lat (usec): min=142, max=503, avg=199.58, stdev=31.93 00:10:22.280 clat percentiles (usec): 00:10:22.280 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 163], 00:10:22.280 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 188], 00:10:22.280 | 70.00th=[ 194], 80.00th=[ 204], 90.00th=[ 239], 95.00th=[ 243], 00:10:22.280 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 343], 99.95th=[ 494], 00:10:22.280 | 99.99th=[ 494] 00:10:22.280 bw ( KiB/s): min= 4096, max= 4096, per=13.82%, avg=4096.00, stdev= 0.00, samples=1 00:10:22.280 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:22.280 lat (usec) : 250=66.35%, 
500=32.28%, 750=0.80% 00:10:22.280 lat (msec) : 50=0.57% 00:10:22.280 cpu : usr=1.80%, sys=2.70%, ctx=2124, majf=0, minf=1 00:10:22.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.280 issued rwts: total=1024,1098,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.280 job2: (groupid=0, jobs=1): err= 0: pid=1122252: Sat Dec 7 09:44:50 2024 00:10:22.280 read: IOPS=1495, BW=5981KiB/s (6124kB/s)(6160KiB/1030msec) 00:10:22.280 slat (nsec): min=6630, max=27235, avg=8410.05, stdev=1994.33 00:10:22.280 clat (usec): min=209, max=40925, avg=368.54, stdev=1794.01 00:10:22.280 lat (usec): min=217, max=40945, avg=376.95, stdev=1794.46 00:10:22.280 clat percentiles (usec): 00:10:22.280 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 247], 00:10:22.280 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 277], 00:10:22.280 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 343], 95.00th=[ 474], 00:10:22.280 | 99.00th=[ 510], 99.50th=[ 545], 99.90th=[41157], 99.95th=[41157], 00:10:22.280 | 99.99th=[41157] 00:10:22.280 write: IOPS=1988, BW=7953KiB/s (8144kB/s)(8192KiB/1030msec); 0 zone resets 00:10:22.280 slat (nsec): min=9173, max=43441, avg=11341.42, stdev=2274.41 00:10:22.280 clat (usec): min=133, max=466, avg=203.44, stdev=36.93 00:10:22.280 lat (usec): min=153, max=506, avg=214.78, stdev=37.32 00:10:22.280 clat percentiles (usec): 00:10:22.280 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 178], 00:10:22.280 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 200], 00:10:22.280 | 70.00th=[ 208], 80.00th=[ 221], 90.00th=[ 249], 95.00th=[ 285], 00:10:22.280 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 371], 99.95th=[ 375], 00:10:22.280 | 99.99th=[ 465] 00:10:22.280 bw ( KiB/s): min= 
8160, max= 8224, per=27.63%, avg=8192.00, stdev=45.25, samples=2 00:10:22.280 iops : min= 2040, max= 2056, avg=2048.00, stdev=11.31, samples=2 00:10:22.280 lat (usec) : 250=61.98%, 500=37.10%, 750=0.78% 00:10:22.280 lat (msec) : 4=0.06%, 50=0.08% 00:10:22.280 cpu : usr=2.43%, sys=4.57%, ctx=3588, majf=0, minf=2 00:10:22.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.280 issued rwts: total=1540,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.280 job3: (groupid=0, jobs=1): err= 0: pid=1122253: Sat Dec 7 09:44:50 2024 00:10:22.280 read: IOPS=1517, BW=6069KiB/s (6215kB/s)(6148KiB/1013msec) 00:10:22.280 slat (nsec): min=7594, max=22419, avg=9211.76, stdev=1250.87 00:10:22.280 clat (usec): min=238, max=40976, avg=363.57, stdev=1040.74 00:10:22.280 lat (usec): min=247, max=40988, avg=372.79, stdev=1040.83 00:10:22.280 clat percentiles (usec): 00:10:22.280 | 1.00th=[ 251], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 269], 00:10:22.280 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 302], 00:10:22.280 | 70.00th=[ 355], 80.00th=[ 441], 90.00th=[ 490], 95.00th=[ 510], 00:10:22.280 | 99.00th=[ 570], 99.50th=[ 627], 99.90th=[ 1029], 99.95th=[41157], 00:10:22.280 | 99.99th=[41157] 00:10:22.280 write: IOPS=2021, BW=8087KiB/s (8281kB/s)(8192KiB/1013msec); 0 zone resets 00:10:22.280 slat (nsec): min=10866, max=46429, avg=12944.26, stdev=2207.79 00:10:22.280 clat (usec): min=145, max=642, avg=196.34, stdev=28.73 00:10:22.280 lat (usec): min=160, max=658, avg=209.29, stdev=29.07 00:10:22.280 clat percentiles (usec): 00:10:22.280 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:10:22.280 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:10:22.280 | 70.00th=[ 
202], 80.00th=[ 208], 90.00th=[ 223], 95.00th=[ 239], 00:10:22.280 | 99.00th=[ 310], 99.50th=[ 334], 99.90th=[ 515], 99.95th=[ 519], 00:10:22.280 | 99.99th=[ 644] 00:10:22.280 bw ( KiB/s): min= 8192, max= 8192, per=27.63%, avg=8192.00, stdev= 0.00, samples=2 00:10:22.280 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:22.280 lat (usec) : 250=55.76%, 500=40.92%, 750=3.24%, 1000=0.03% 00:10:22.280 lat (msec) : 2=0.03%, 50=0.03% 00:10:22.280 cpu : usr=4.15%, sys=4.94%, ctx=3586, majf=0, minf=1 00:10:22.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.280 issued rwts: total=1537,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.280 00:10:22.280 Run status group 0 (all jobs): 00:10:22.280 READ: bw=23.3MiB/s (24.5MB/s), 4092KiB/s-8184KiB/s (4190kB/s-8380kB/s), io=24.0MiB (25.2MB), run=1001-1030msec 00:10:22.280 WRITE: bw=29.0MiB/s (30.4MB/s), 4388KiB/s-9750KiB/s (4493kB/s-9984kB/s), io=29.8MiB (31.3MB), run=1001-1030msec 00:10:22.280 00:10:22.280 Disk stats (read/write): 00:10:22.280 nvme0n1: ios=1831/2048, merge=0/0, ticks=680/334, in_queue=1014, util=97.39% 00:10:22.280 nvme0n2: ios=539/1024, merge=0/0, ticks=1585/186, in_queue=1771, util=98.48% 00:10:22.280 nvme0n3: ios=1536/1820, merge=0/0, ticks=420/356, in_queue=776, util=88.97% 00:10:22.280 nvme0n4: ios=1565/1536, merge=0/0, ticks=1512/287, in_queue=1799, util=98.43% 00:10:22.280 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:22.280 [global] 00:10:22.280 thread=1 00:10:22.280 invalidate=1 00:10:22.280 rw=write 00:10:22.280 time_based=1 00:10:22.280 runtime=1 00:10:22.280 
ioengine=libaio 00:10:22.280 direct=1 00:10:22.280 bs=4096 00:10:22.280 iodepth=128 00:10:22.280 norandommap=0 00:10:22.280 numjobs=1 00:10:22.280 00:10:22.280 verify_dump=1 00:10:22.281 verify_backlog=512 00:10:22.281 verify_state_save=0 00:10:22.281 do_verify=1 00:10:22.281 verify=crc32c-intel 00:10:22.281 [job0] 00:10:22.281 filename=/dev/nvme0n1 00:10:22.281 [job1] 00:10:22.281 filename=/dev/nvme0n2 00:10:22.281 [job2] 00:10:22.281 filename=/dev/nvme0n3 00:10:22.281 [job3] 00:10:22.281 filename=/dev/nvme0n4 00:10:22.281 Could not set queue depth (nvme0n1) 00:10:22.281 Could not set queue depth (nvme0n2) 00:10:22.281 Could not set queue depth (nvme0n3) 00:10:22.281 Could not set queue depth (nvme0n4) 00:10:22.537 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:22.537 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:22.537 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:22.537 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:22.537 fio-3.35 00:10:22.537 Starting 4 threads 00:10:23.907 00:10:23.907 job0: (groupid=0, jobs=1): err= 0: pid=1122628: Sat Dec 7 09:44:52 2024 00:10:23.907 read: IOPS=4120, BW=16.1MiB/s (16.9MB/s)(16.3MiB/1013msec) 00:10:23.907 slat (nsec): min=1035, max=23371k, avg=117902.84, stdev=842151.45 00:10:23.907 clat (usec): min=4806, max=63704, avg=13938.47, stdev=6316.57 00:10:23.907 lat (usec): min=4815, max=63712, avg=14056.37, stdev=6401.78 00:10:23.907 clat percentiles (usec): 00:10:23.907 | 1.00th=[ 7373], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10290], 00:10:23.907 | 30.00th=[10945], 40.00th=[11731], 50.00th=[12256], 60.00th=[12649], 00:10:23.907 | 70.00th=[13698], 80.00th=[15008], 90.00th=[19530], 95.00th=[29492], 00:10:23.907 | 99.00th=[36439], 99.50th=[47449], 99.90th=[63701], 
99.95th=[63701], 00:10:23.907 | 99.99th=[63701] 00:10:23.907 write: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec); 0 zone resets 00:10:23.907 slat (nsec): min=1932, max=13313k, avg=98297.41, stdev=596997.99 00:10:23.907 clat (usec): min=443, max=80472, avg=15244.30, stdev=14338.35 00:10:23.907 lat (usec): min=467, max=80483, avg=15342.59, stdev=14417.05 00:10:23.907 clat percentiles (usec): 00:10:23.907 | 1.00th=[ 1680], 5.00th=[ 4555], 10.00th=[ 5735], 20.00th=[ 8225], 00:10:23.907 | 30.00th=[ 9372], 40.00th=[10421], 50.00th=[10683], 60.00th=[11600], 00:10:23.907 | 70.00th=[12780], 80.00th=[19006], 90.00th=[23725], 95.00th=[55837], 00:10:23.907 | 99.00th=[76022], 99.50th=[79168], 99.90th=[80217], 99.95th=[80217], 00:10:23.907 | 99.99th=[80217] 00:10:23.907 bw ( KiB/s): min=13952, max=22512, per=25.59%, avg=18232.00, stdev=6052.83, samples=2 00:10:23.907 iops : min= 3488, max= 5628, avg=4558.00, stdev=1513.21, samples=2 00:10:23.907 lat (usec) : 500=0.02%, 750=0.09%, 1000=0.14% 00:10:23.907 lat (msec) : 2=0.35%, 4=1.43%, 10=22.98%, 20=60.08%, 50=11.93% 00:10:23.907 lat (msec) : 100=2.97% 00:10:23.907 cpu : usr=3.06%, sys=5.63%, ctx=415, majf=0, minf=1 00:10:23.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:23.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:23.907 issued rwts: total=4174,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:23.907 job1: (groupid=0, jobs=1): err= 0: pid=1122629: Sat Dec 7 09:44:52 2024 00:10:23.907 read: IOPS=5596, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1005msec) 00:10:23.907 slat (nsec): min=1359, max=30573k, avg=99167.30, stdev=847897.64 00:10:23.907 clat (usec): min=1621, max=61839, avg=12306.66, stdev=6507.38 00:10:23.907 lat (usec): min=3365, max=61869, avg=12405.82, stdev=6565.76 00:10:23.907 clat percentiles 
(usec): 00:10:23.907 | 1.00th=[ 4752], 5.00th=[ 8094], 10.00th=[ 8586], 20.00th=[ 9110], 00:10:23.907 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:10:23.907 | 70.00th=[10945], 80.00th=[13829], 90.00th=[17695], 95.00th=[31065], 00:10:23.907 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:10:23.907 | 99.99th=[61604] 00:10:23.907 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:10:23.907 slat (usec): min=2, max=11724, avg=73.09, stdev=415.35 00:10:23.907 clat (usec): min=1493, max=53061, avg=10267.89, stdev=5724.08 00:10:23.907 lat (usec): min=1506, max=53072, avg=10340.98, stdev=5765.60 00:10:23.907 clat percentiles (usec): 00:10:23.907 | 1.00th=[ 2933], 5.00th=[ 4883], 10.00th=[ 6587], 20.00th=[ 8291], 00:10:23.907 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:10:23.907 | 70.00th=[10028], 80.00th=[10290], 90.00th=[11600], 95.00th=[19268], 00:10:23.907 | 99.00th=[42730], 99.50th=[47449], 99.90th=[53216], 99.95th=[53216], 00:10:23.907 | 99.99th=[53216] 00:10:23.907 bw ( KiB/s): min=20016, max=25040, per=31.62%, avg=22528.00, stdev=3552.50, samples=2 00:10:23.907 iops : min= 5004, max= 6260, avg=5632.00, stdev=888.13, samples=2 00:10:23.907 lat (msec) : 2=0.05%, 4=1.77%, 10=54.38%, 20=37.79%, 50=5.94% 00:10:23.907 lat (msec) : 100=0.06% 00:10:23.907 cpu : usr=3.69%, sys=6.37%, ctx=672, majf=0, minf=1 00:10:23.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:23.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:23.907 issued rwts: total=5624,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:23.907 job2: (groupid=0, jobs=1): err= 0: pid=1122630: Sat Dec 7 09:44:52 2024 00:10:23.907 read: IOPS=3527, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1016msec) 
00:10:23.907 slat (nsec): min=1400, max=13826k, avg=120334.70, stdev=817193.41 00:10:23.907 clat (usec): min=4334, max=44168, avg=14737.89, stdev=5385.26 00:10:23.907 lat (usec): min=4339, max=44171, avg=14858.23, stdev=5445.22 00:10:23.907 clat percentiles (usec): 00:10:23.908 | 1.00th=[ 5604], 5.00th=[ 8717], 10.00th=[10290], 20.00th=[11600], 00:10:23.908 | 30.00th=[11863], 40.00th=[12911], 50.00th=[14091], 60.00th=[14484], 00:10:23.908 | 70.00th=[15401], 80.00th=[16057], 90.00th=[20055], 95.00th=[27132], 00:10:23.908 | 99.00th=[34866], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:10:23.908 | 99.99th=[44303] 00:10:23.908 write: IOPS=3698, BW=14.4MiB/s (15.1MB/s)(14.7MiB/1016msec); 0 zone resets 00:10:23.908 slat (usec): min=2, max=10063, avg=141.74, stdev=726.03 00:10:23.908 clat (usec): min=3168, max=78735, avg=20049.00, stdev=15286.22 00:10:23.908 lat (usec): min=3178, max=78747, avg=20190.74, stdev=15378.73 00:10:23.908 clat percentiles (usec): 00:10:23.908 | 1.00th=[ 4555], 5.00th=[ 7308], 10.00th=[10028], 20.00th=[11076], 00:10:23.908 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12649], 60.00th=[14746], 00:10:23.908 | 70.00th=[21890], 80.00th=[24249], 90.00th=[44827], 95.00th=[61080], 00:10:23.908 | 99.00th=[72877], 99.50th=[76022], 99.90th=[79168], 99.95th=[79168], 00:10:23.908 | 99.99th=[79168] 00:10:23.908 bw ( KiB/s): min= 9024, max=20016, per=20.38%, avg=14520.00, stdev=7772.52, samples=2 00:10:23.908 iops : min= 2256, max= 5004, avg=3630.00, stdev=1943.13, samples=2 00:10:23.908 lat (msec) : 4=0.16%, 10=9.04%, 20=68.16%, 50=19.08%, 100=3.55% 00:10:23.908 cpu : usr=2.66%, sys=4.53%, ctx=409, majf=0, minf=1 00:10:23.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:23.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:23.908 issued rwts: total=3584,3758,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:23.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:23.908 job3: (groupid=0, jobs=1): err= 0: pid=1122631: Sat Dec 7 09:44:52 2024 00:10:23.908 read: IOPS=3644, BW=14.2MiB/s (14.9MB/s)(14.4MiB/1010msec) 00:10:23.908 slat (nsec): min=1157, max=32612k, avg=142855.50, stdev=1280828.76 00:10:23.908 clat (usec): min=2841, max=83486, avg=18318.46, stdev=13115.06 00:10:23.908 lat (usec): min=5029, max=83493, avg=18461.32, stdev=13200.59 00:10:23.908 clat percentiles (usec): 00:10:23.908 | 1.00th=[ 5932], 5.00th=[ 8160], 10.00th=[ 9241], 20.00th=[10683], 00:10:23.908 | 30.00th=[11731], 40.00th=[12387], 50.00th=[13042], 60.00th=[15139], 00:10:23.908 | 70.00th=[18220], 80.00th=[22676], 90.00th=[35390], 95.00th=[42730], 00:10:23.908 | 99.00th=[83362], 99.50th=[83362], 99.90th=[83362], 99.95th=[83362], 00:10:23.908 | 99.99th=[83362] 00:10:23.908 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:10:23.908 slat (usec): min=2, max=22575, avg=107.09, stdev=979.79 00:10:23.908 clat (usec): min=868, max=54485, avg=14415.30, stdev=7841.84 00:10:23.908 lat (usec): min=876, max=54490, avg=14522.39, stdev=7889.47 00:10:23.908 clat percentiles (usec): 00:10:23.908 | 1.00th=[ 3097], 5.00th=[ 5735], 10.00th=[ 6521], 20.00th=[ 8717], 00:10:23.908 | 30.00th=[10814], 40.00th=[11863], 50.00th=[12911], 60.00th=[13829], 00:10:23.908 | 70.00th=[14877], 80.00th=[18220], 90.00th=[25560], 95.00th=[29230], 00:10:23.908 | 99.00th=[43779], 99.50th=[43779], 99.90th=[51643], 99.95th=[54264], 00:10:23.908 | 99.99th=[54264] 00:10:23.908 bw ( KiB/s): min=12288, max=20232, per=22.83%, avg=16260.00, stdev=5617.26, samples=2 00:10:23.908 iops : min= 3072, max= 5058, avg=4065.00, stdev=1404.31, samples=2 00:10:23.908 lat (usec) : 1000=0.04% 00:10:23.908 lat (msec) : 2=0.21%, 4=0.86%, 10=19.57%, 20=59.01%, 50=18.54% 00:10:23.908 lat (msec) : 100=1.77% 00:10:23.908 cpu : usr=2.08%, sys=3.96%, ctx=272, majf=0, minf=1 00:10:23.908 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:23.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:23.908 issued rwts: total=3681,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:23.908 00:10:23.908 Run status group 0 (all jobs): 00:10:23.908 READ: bw=65.6MiB/s (68.8MB/s), 13.8MiB/s-21.9MiB/s (14.4MB/s-22.9MB/s), io=66.7MiB (69.9MB), run=1005-1016msec 00:10:23.908 WRITE: bw=69.6MiB/s (72.9MB/s), 14.4MiB/s-21.9MiB/s (15.1MB/s-23.0MB/s), io=70.7MiB (74.1MB), run=1005-1016msec 00:10:23.908 00:10:23.908 Disk stats (read/write): 00:10:23.908 nvme0n1: ios=3949/4096, merge=0/0, ticks=45118/48921, in_queue=94039, util=86.57% 00:10:23.908 nvme0n2: ios=4523/4608, merge=0/0, ticks=56494/48358, in_queue=104852, util=98.17% 00:10:23.908 nvme0n3: ios=3118/3215, merge=0/0, ticks=42062/54413, in_queue=96475, util=96.67% 00:10:23.908 nvme0n4: ios=3130/3248, merge=0/0, ticks=30368/31851, in_queue=62219, util=98.01% 00:10:23.908 09:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:23.908 [global] 00:10:23.908 thread=1 00:10:23.908 invalidate=1 00:10:23.908 rw=randwrite 00:10:23.908 time_based=1 00:10:23.908 runtime=1 00:10:23.908 ioengine=libaio 00:10:23.908 direct=1 00:10:23.908 bs=4096 00:10:23.908 iodepth=128 00:10:23.908 norandommap=0 00:10:23.908 numjobs=1 00:10:23.908 00:10:23.908 verify_dump=1 00:10:23.908 verify_backlog=512 00:10:23.908 verify_state_save=0 00:10:23.908 do_verify=1 00:10:23.908 verify=crc32c-intel 00:10:23.908 [job0] 00:10:23.908 filename=/dev/nvme0n1 00:10:23.908 [job1] 00:10:23.908 filename=/dev/nvme0n2 00:10:23.908 [job2] 00:10:23.908 filename=/dev/nvme0n3 00:10:23.908 [job3] 00:10:23.908 filename=/dev/nvme0n4 00:10:23.908 
Could not set queue depth (nvme0n1) 00:10:23.908 Could not set queue depth (nvme0n2) 00:10:23.908 Could not set queue depth (nvme0n3) 00:10:23.908 Could not set queue depth (nvme0n4) 00:10:23.908 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.908 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.908 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.908 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.908 fio-3.35 00:10:23.908 Starting 4 threads 00:10:25.278 00:10:25.278 job0: (groupid=0, jobs=1): err= 0: pid=1122997: Sat Dec 7 09:44:53 2024 00:10:25.278 read: IOPS=5219, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1005msec) 00:10:25.278 slat (nsec): min=1379, max=10638k, avg=98135.16, stdev=703819.45 00:10:25.278 clat (usec): min=1053, max=21980, avg=12186.26, stdev=2955.00 00:10:25.278 lat (usec): min=4419, max=21984, avg=12284.39, stdev=3001.48 00:10:25.278 clat percentiles (usec): 00:10:25.278 | 1.00th=[ 6652], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10028], 00:10:25.278 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11600], 60.00th=[11863], 00:10:25.278 | 70.00th=[12387], 80.00th=[14484], 90.00th=[16712], 95.00th=[18482], 00:10:25.278 | 99.00th=[20841], 99.50th=[21365], 99.90th=[21890], 99.95th=[21890], 00:10:25.278 | 99.99th=[21890] 00:10:25.278 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:10:25.278 slat (usec): min=2, max=9329, avg=80.58, stdev=473.73 00:10:25.278 clat (usec): min=1532, max=30288, avg=11264.06, stdev=4025.79 00:10:25.278 lat (usec): min=1558, max=30297, avg=11344.64, stdev=4064.61 00:10:25.278 clat percentiles (usec): 00:10:25.278 | 1.00th=[ 3589], 5.00th=[ 6325], 10.00th=[ 7242], 20.00th=[ 9503], 00:10:25.278 | 30.00th=[10028], 40.00th=[10421], 
50.00th=[10945], 60.00th=[11469], 00:10:25.278 | 70.00th=[11994], 80.00th=[12125], 90.00th=[13960], 95.00th=[19006], 00:10:25.278 | 99.00th=[27395], 99.50th=[27657], 99.90th=[30278], 99.95th=[30278], 00:10:25.278 | 99.99th=[30278] 00:10:25.278 bw ( KiB/s): min=20480, max=24560, per=29.03%, avg=22520.00, stdev=2885.00, samples=2 00:10:25.278 iops : min= 5120, max= 6140, avg=5630.00, stdev=721.25, samples=2 00:10:25.278 lat (msec) : 2=0.04%, 4=0.85%, 10=25.06%, 20=70.56%, 50=3.50% 00:10:25.278 cpu : usr=5.08%, sys=5.88%, ctx=560, majf=0, minf=1 00:10:25.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:25.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.278 issued rwts: total=5246,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.278 job1: (groupid=0, jobs=1): err= 0: pid=1122999: Sat Dec 7 09:44:53 2024 00:10:25.278 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:10:25.278 slat (nsec): min=1189, max=12990k, avg=92717.68, stdev=627350.89 00:10:25.278 clat (usec): min=1438, max=36049, avg=12400.74, stdev=3546.64 00:10:25.278 lat (usec): min=1444, max=36080, avg=12493.46, stdev=3599.16 00:10:25.278 clat percentiles (usec): 00:10:25.278 | 1.00th=[ 2212], 5.00th=[ 5997], 10.00th=[ 8455], 20.00th=[10814], 00:10:25.278 | 30.00th=[11600], 40.00th=[11731], 50.00th=[12125], 60.00th=[12780], 00:10:25.278 | 70.00th=[13435], 80.00th=[14877], 90.00th=[16188], 95.00th=[17433], 00:10:25.278 | 99.00th=[21890], 99.50th=[27132], 99.90th=[31851], 99.95th=[31851], 00:10:25.278 | 99.99th=[35914] 00:10:25.278 write: IOPS=5465, BW=21.3MiB/s (22.4MB/s)(21.5MiB/1007msec); 0 zone resets 00:10:25.278 slat (usec): min=2, max=10633, avg=78.79, stdev=631.93 00:10:25.278 clat (usec): min=1563, max=41797, avg=11584.63, stdev=4317.07 00:10:25.278 lat (usec): 
min=1572, max=41810, avg=11663.42, stdev=4351.93 00:10:25.278 clat percentiles (usec): 00:10:25.278 | 1.00th=[ 4146], 5.00th=[ 5866], 10.00th=[ 7832], 20.00th=[ 9241], 00:10:25.278 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11469], 60.00th=[11731], 00:10:25.278 | 70.00th=[11994], 80.00th=[12649], 90.00th=[14615], 95.00th=[17695], 00:10:25.278 | 99.00th=[32900], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:10:25.278 | 99.99th=[41681] 00:10:25.278 bw ( KiB/s): min=20480, max=22528, per=27.72%, avg=21504.00, stdev=1448.15, samples=2 00:10:25.278 iops : min= 5120, max= 5632, avg=5376.00, stdev=362.04, samples=2 00:10:25.278 lat (msec) : 2=0.40%, 4=1.06%, 10=20.52%, 20=75.30%, 50=2.72% 00:10:25.278 cpu : usr=4.47%, sys=6.36%, ctx=295, majf=0, minf=1 00:10:25.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:25.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.278 issued rwts: total=5120,5504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.278 job2: (groupid=0, jobs=1): err= 0: pid=1123000: Sat Dec 7 09:44:53 2024 00:10:25.278 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:10:25.278 slat (nsec): min=1508, max=19268k, avg=158726.83, stdev=948834.36 00:10:25.278 clat (usec): min=9247, max=84407, avg=17663.60, stdev=12786.25 00:10:25.278 lat (usec): min=9437, max=84416, avg=17822.32, stdev=12909.67 00:10:25.278 clat percentiles (usec): 00:10:25.278 | 1.00th=[ 9896], 5.00th=[11076], 10.00th=[12125], 20.00th=[12518], 00:10:25.278 | 30.00th=[13304], 40.00th=[13698], 50.00th=[13960], 60.00th=[14091], 00:10:25.278 | 70.00th=[14877], 80.00th=[15926], 90.00th=[22152], 95.00th=[51643], 00:10:25.278 | 99.00th=[77071], 99.50th=[78119], 99.90th=[84411], 99.95th=[84411], 00:10:25.278 | 99.99th=[84411] 00:10:25.278 write: IOPS=3269, 
BW=12.8MiB/s (13.4MB/s)(12.9MiB/1008msec); 0 zone resets 00:10:25.278 slat (usec): min=2, max=22134, avg=150.91, stdev=1048.56 00:10:25.279 clat (usec): min=1884, max=86079, avg=22096.55, stdev=15351.10 00:10:25.279 lat (usec): min=9119, max=86086, avg=22247.46, stdev=15379.74 00:10:25.279 clat percentiles (usec): 00:10:25.279 | 1.00th=[10421], 5.00th=[11338], 10.00th=[12125], 20.00th=[13435], 00:10:25.279 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13960], 60.00th=[14877], 00:10:25.279 | 70.00th=[22676], 80.00th=[32113], 90.00th=[40633], 95.00th=[53740], 00:10:25.279 | 99.00th=[78119], 99.50th=[86508], 99.90th=[86508], 99.95th=[86508], 00:10:25.279 | 99.99th=[86508] 00:10:25.279 bw ( KiB/s): min=12296, max=13048, per=16.33%, avg=12672.00, stdev=531.74, samples=2 00:10:25.279 iops : min= 3074, max= 3262, avg=3168.00, stdev=132.94, samples=2 00:10:25.279 lat (msec) : 2=0.02%, 10=0.82%, 20=76.05%, 50=16.17%, 100=6.94% 00:10:25.279 cpu : usr=2.58%, sys=4.07%, ctx=318, majf=0, minf=2 00:10:25.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:25.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.279 issued rwts: total=3072,3296,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.279 job3: (groupid=0, jobs=1): err= 0: pid=1123001: Sat Dec 7 09:44:53 2024 00:10:25.279 read: IOPS=4950, BW=19.3MiB/s (20.3MB/s)(19.5MiB/1007msec) 00:10:25.279 slat (nsec): min=1375, max=12711k, avg=114678.02, stdev=829718.07 00:10:25.279 clat (usec): min=3093, max=25612, avg=13713.52, stdev=3492.78 00:10:25.279 lat (usec): min=3948, max=27677, avg=13828.20, stdev=3552.44 00:10:25.279 clat percentiles (usec): 00:10:25.279 | 1.00th=[ 5211], 5.00th=[ 8979], 10.00th=[10814], 20.00th=[11600], 00:10:25.279 | 30.00th=[11994], 40.00th=[12780], 50.00th=[13173], 60.00th=[13435], 
00:10:25.279 | 70.00th=[13829], 80.00th=[15139], 90.00th=[19268], 95.00th=[21365], 00:10:25.279 | 99.00th=[23987], 99.50th=[24773], 99.90th=[25560], 99.95th=[25560], 00:10:25.279 | 99.99th=[25560] 00:10:25.279 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:10:25.279 slat (usec): min=2, max=9565, avg=76.87, stdev=354.85 00:10:25.279 clat (usec): min=492, max=25612, avg=11550.23, stdev=2604.70 00:10:25.279 lat (usec): min=628, max=25616, avg=11627.10, stdev=2637.42 00:10:25.279 clat percentiles (usec): 00:10:25.279 | 1.00th=[ 3752], 5.00th=[ 5997], 10.00th=[ 7963], 20.00th=[10028], 00:10:25.279 | 30.00th=[11207], 40.00th=[11731], 50.00th=[11994], 60.00th=[12387], 00:10:25.279 | 70.00th=[13042], 80.00th=[13698], 90.00th=[13960], 95.00th=[14091], 00:10:25.279 | 99.00th=[15533], 99.50th=[15795], 99.90th=[24773], 99.95th=[25035], 00:10:25.279 | 99.99th=[25560] 00:10:25.279 bw ( KiB/s): min=20480, max=20480, per=26.40%, avg=20480.00, stdev= 0.00, samples=2 00:10:25.279 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:25.279 lat (usec) : 500=0.01%, 1000=0.04% 00:10:25.279 lat (msec) : 2=0.01%, 4=0.91%, 10=13.55%, 20=81.37%, 50=4.12% 00:10:25.279 cpu : usr=4.37%, sys=4.77%, ctx=607, majf=0, minf=1 00:10:25.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:25.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.279 issued rwts: total=4985,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.279 00:10:25.279 Run status group 0 (all jobs): 00:10:25.279 READ: bw=71.4MiB/s (74.9MB/s), 11.9MiB/s-20.4MiB/s (12.5MB/s-21.4MB/s), io=72.0MiB (75.5MB), run=1005-1008msec 00:10:25.279 WRITE: bw=75.8MiB/s (79.4MB/s), 12.8MiB/s-21.9MiB/s (13.4MB/s-23.0MB/s), io=76.4MiB (80.1MB), run=1005-1008msec 00:10:25.279 00:10:25.279 
Disk stats (read/write): 00:10:25.279 nvme0n1: ios=4646/5023, merge=0/0, ticks=53881/51088, in_queue=104969, util=97.80% 00:10:25.279 nvme0n2: ios=4645/4725, merge=0/0, ticks=35125/31161, in_queue=66286, util=97.06% 00:10:25.279 nvme0n3: ios=2560/2687, merge=0/0, ticks=13851/12675, in_queue=26526, util=88.97% 00:10:25.279 nvme0n4: ios=4139/4343, merge=0/0, ticks=56133/49441, in_queue=105574, util=96.75% 00:10:25.279 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:25.279 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1123240 00:10:25.279 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:25.279 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:25.279 [global] 00:10:25.279 thread=1 00:10:25.279 invalidate=1 00:10:25.279 rw=read 00:10:25.279 time_based=1 00:10:25.279 runtime=10 00:10:25.279 ioengine=libaio 00:10:25.279 direct=1 00:10:25.279 bs=4096 00:10:25.279 iodepth=1 00:10:25.279 norandommap=1 00:10:25.279 numjobs=1 00:10:25.279 00:10:25.279 [job0] 00:10:25.279 filename=/dev/nvme0n1 00:10:25.279 [job1] 00:10:25.279 filename=/dev/nvme0n2 00:10:25.279 [job2] 00:10:25.279 filename=/dev/nvme0n3 00:10:25.279 [job3] 00:10:25.279 filename=/dev/nvme0n4 00:10:25.279 Could not set queue depth (nvme0n1) 00:10:25.279 Could not set queue depth (nvme0n2) 00:10:25.279 Could not set queue depth (nvme0n3) 00:10:25.279 Could not set queue depth (nvme0n4) 00:10:25.536 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.536 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.536 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.536 job3: (g=0): rw=read, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.536 fio-3.35 00:10:25.536 Starting 4 threads 00:10:28.812 09:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:28.812 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=37543936, buflen=4096 00:10:28.813 fio: pid=1123381, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:28.813 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:28.813 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=29761536, buflen=4096 00:10:28.813 fio: pid=1123380, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:28.813 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:28.813 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:28.813 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:28.813 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:28.813 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=14319616, buflen=4096 00:10:28.813 fio: pid=1123378, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:29.069 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=44589056, buflen=4096 00:10:29.069 fio: pid=1123379, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:10:29.069 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:29.069 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:29.069 00:10:29.069 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1123378: Sat Dec 7 09:44:57 2024 00:10:29.069 read: IOPS=1115, BW=4461KiB/s (4568kB/s)(13.7MiB/3135msec) 00:10:29.069 slat (usec): min=5, max=18755, avg=16.26, stdev=374.16 00:10:29.069 clat (usec): min=209, max=42244, avg=871.70, stdev=4872.95 00:10:29.069 lat (usec): min=216, max=61000, avg=884.61, stdev=4930.30 00:10:29.069 clat percentiles (usec): 00:10:29.069 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 249], 00:10:29.070 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 289], 00:10:29.070 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 326], 95.00th=[ 359], 00:10:29.070 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:29.070 | 99.99th=[42206] 00:10:29.070 bw ( KiB/s): min= 96, max=13184, per=12.22%, avg=4507.50, stdev=5562.75, samples=6 00:10:29.070 iops : min= 24, max= 3296, avg=1126.83, stdev=1390.69, samples=6 00:10:29.070 lat (usec) : 250=20.30%, 500=77.92%, 750=0.29% 00:10:29.070 lat (msec) : 10=0.03%, 50=1.43% 00:10:29.070 cpu : usr=0.38%, sys=0.96%, ctx=3499, majf=0, minf=2 00:10:29.070 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.070 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.070 issued rwts: total=3497,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.070 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.070 job1: (groupid=0, jobs=1): 
err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1123379: Sat Dec 7 09:44:57 2024 00:10:29.070 read: IOPS=3257, BW=12.7MiB/s (13.3MB/s)(42.5MiB/3342msec) 00:10:29.070 slat (usec): min=5, max=18811, avg=14.54, stdev=335.05 00:10:29.070 clat (usec): min=173, max=42039, avg=288.90, stdev=1125.32 00:10:29.070 lat (usec): min=181, max=60059, avg=303.44, stdev=1233.31 00:10:29.070 clat percentiles (usec): 00:10:29.070 | 1.00th=[ 194], 5.00th=[ 208], 10.00th=[ 219], 20.00th=[ 231], 00:10:29.070 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:10:29.070 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 330], 00:10:29.070 | 99.00th=[ 400], 99.50th=[ 429], 99.90th=[ 7242], 99.95th=[41157], 00:10:29.070 | 99.99th=[41681] 00:10:29.070 bw ( KiB/s): min= 7952, max=15744, per=37.32%, avg=13765.67, stdev=2922.42, samples=6 00:10:29.070 iops : min= 1988, max= 3936, avg=3441.33, stdev=730.55, samples=6 00:10:29.070 lat (usec) : 250=52.79%, 500=46.94%, 750=0.17% 00:10:29.070 lat (msec) : 10=0.01%, 20=0.01%, 50=0.08% 00:10:29.070 cpu : usr=0.78%, sys=3.02%, ctx=10896, majf=0, minf=1 00:10:29.070 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.070 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.070 issued rwts: total=10887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.070 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.070 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1123380: Sat Dec 7 09:44:57 2024 00:10:29.070 read: IOPS=2491, BW=9964KiB/s (10.2MB/s)(28.4MiB/2917msec) 00:10:29.070 slat (usec): min=5, max=15594, avg=12.44, stdev=248.28 00:10:29.070 clat (usec): min=196, max=42252, avg=384.65, stdev=1852.99 00:10:29.070 lat (usec): min=203, max=42259, avg=397.09, stdev=1869.61 00:10:29.070 clat 
percentiles (usec): 00:10:29.070 | 1.00th=[ 215], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 255], 00:10:29.070 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 297], 00:10:29.070 | 70.00th=[ 310], 80.00th=[ 338], 90.00th=[ 363], 95.00th=[ 420], 00:10:29.070 | 99.00th=[ 502], 99.50th=[ 519], 99.90th=[40633], 99.95th=[41157], 00:10:29.070 | 99.99th=[42206] 00:10:29.070 bw ( KiB/s): min= 824, max=13240, per=25.75%, avg=9497.60, stdev=5058.58, samples=5 00:10:29.070 iops : min= 206, max= 3310, avg=2374.40, stdev=1264.64, samples=5 00:10:29.070 lat (usec) : 250=17.79%, 500=81.20%, 750=0.74% 00:10:29.070 lat (msec) : 10=0.03%, 20=0.01%, 50=0.21% 00:10:29.070 cpu : usr=0.45%, sys=2.64%, ctx=7271, majf=0, minf=2 00:10:29.070 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.070 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.070 issued rwts: total=7267,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.070 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.070 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1123381: Sat Dec 7 09:44:57 2024 00:10:29.070 read: IOPS=3386, BW=13.2MiB/s (13.9MB/s)(35.8MiB/2707msec) 00:10:29.070 slat (nsec): min=6494, max=37943, avg=8608.23, stdev=1453.21 00:10:29.070 clat (usec): min=234, max=559, avg=284.34, stdev=15.84 00:10:29.070 lat (usec): min=242, max=595, avg=292.95, stdev=16.06 00:10:29.070 clat percentiles (usec): 00:10:29.070 | 1.00th=[ 255], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 273], 00:10:29.070 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:10:29.070 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 310], 00:10:29.070 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 371], 99.95th=[ 400], 00:10:29.070 | 99.99th=[ 562] 00:10:29.070 bw ( KiB/s): min=13472, max=13848, 
per=36.90%, avg=13609.60, stdev=143.49, samples=5 00:10:29.070 iops : min= 3368, max= 3462, avg=3402.40, stdev=35.87, samples=5 00:10:29.070 lat (usec) : 250=0.35%, 500=99.63%, 750=0.01% 00:10:29.070 cpu : usr=0.81%, sys=3.73%, ctx=9167, majf=0, minf=2 00:10:29.070 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.070 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.070 issued rwts: total=9167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.070 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.070 00:10:29.070 Run status group 0 (all jobs): 00:10:29.070 READ: bw=36.0MiB/s (37.8MB/s), 4461KiB/s-13.2MiB/s (4568kB/s-13.9MB/s), io=120MiB (126MB), run=2707-3342msec 00:10:29.070 00:10:29.070 Disk stats (read/write): 00:10:29.070 nvme0n1: ios=3496/0, merge=0/0, ticks=3022/0, in_queue=3022, util=94.89% 00:10:29.070 nvme0n2: ios=10755/0, merge=0/0, ticks=3889/0, in_queue=3889, util=98.45% 00:10:29.070 nvme0n3: ios=7159/0, merge=0/0, ticks=3692/0, in_queue=3692, util=98.01% 00:10:29.070 nvme0n4: ios=8855/0, merge=0/0, ticks=2456/0, in_queue=2456, util=96.41% 00:10:29.325 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:29.325 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:29.581 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:29.581 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:29.581 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:29.582 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:29.838 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:29.838 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:30.095 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:30.095 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1123240 00:10:30.095 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:30.095 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:30.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.095 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:30.095 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:30.095 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:30.095 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.095 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:30.095 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.095 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 
-- # return 0 00:10:30.095 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:30.095 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:30.095 nvmf hotplug test: fio failed as expected 00:10:30.095 09:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:30.351 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:30.351 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:30.351 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:30.351 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:30.351 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:30.351 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:30.351 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:30.351 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:30.351 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:30.351 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:30.351 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:30.351 rmmod nvme_tcp 00:10:30.351 rmmod nvme_fabrics 00:10:30.608 rmmod nvme_keyring 00:10:30.608 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:30.608 09:44:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:30.608 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:30.608 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 1120503 ']' 00:10:30.608 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 1120503 00:10:30.608 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1120503 ']' 00:10:30.608 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1120503 00:10:30.608 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:30.608 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:30.608 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1120503 00:10:30.608 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:30.608 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:30.608 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1120503' 00:10:30.608 killing process with pid 1120503 00:10:30.608 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1120503 00:10:30.608 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1120503 00:10:30.866 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:30.866 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:30.866 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 
00:10:30.866 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:30.866 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:10:30.866 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:30.866 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:10:30.866 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:30.866 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:30.866 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.866 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.866 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.764 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:32.764 00:10:32.764 real 0m26.194s 00:10:32.764 user 1m45.038s 00:10:32.764 sys 0m8.332s 00:10:32.764 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:32.764 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.764 ************************************ 00:10:32.764 END TEST nvmf_fio_target 00:10:32.764 ************************************ 00:10:32.764 09:45:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:32.764 09:45:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:32.764 09:45:01 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:10:32.764 09:45:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:33.023 ************************************ 00:10:33.023 START TEST nvmf_bdevio 00:10:33.023 ************************************ 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:33.023 * Looking for test storage... 00:10:33.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # 
ver1_l=2 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 
00:10:33.023 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:33.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.024 --rc genhtml_branch_coverage=1 00:10:33.024 --rc genhtml_function_coverage=1 00:10:33.024 --rc genhtml_legend=1 00:10:33.024 --rc geninfo_all_blocks=1 00:10:33.024 --rc geninfo_unexecuted_blocks=1 00:10:33.024 00:10:33.024 ' 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:33.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.024 --rc genhtml_branch_coverage=1 00:10:33.024 --rc genhtml_function_coverage=1 00:10:33.024 --rc genhtml_legend=1 00:10:33.024 --rc geninfo_all_blocks=1 00:10:33.024 --rc geninfo_unexecuted_blocks=1 00:10:33.024 00:10:33.024 ' 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:33.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.024 --rc genhtml_branch_coverage=1 00:10:33.024 --rc genhtml_function_coverage=1 00:10:33.024 --rc genhtml_legend=1 00:10:33.024 --rc geninfo_all_blocks=1 00:10:33.024 --rc geninfo_unexecuted_blocks=1 00:10:33.024 00:10:33.024 ' 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:33.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.024 --rc genhtml_branch_coverage=1 00:10:33.024 --rc genhtml_function_coverage=1 00:10:33.024 --rc genhtml_legend=1 00:10:33.024 --rc geninfo_all_blocks=1 00:10:33.024 --rc geninfo_unexecuted_blocks=1 00:10:33.024 00:10:33.024 ' 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:33.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
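The `paths/export.sh` steps above show the same `/opt/go`, `/opt/protoc`, and `/opt/golangci` entries prepended to `PATH` on every re-source, which is why the echoed value repeats them many times. A small dedup pass like the following (illustrative, not part of the harness) keeps only the first occurrence of each entry:

```shell
# Hedged sketch: collapse duplicate PATH entries, preserving the first
# occurrence of each. awk splits on ':' and its seen[] array drops repeats.
dedupe_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

dedupe_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin"
```

Harmless here since lookup order is unchanged, but it shortens every subsequent `echo $PATH` in a long trace like this one.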
MALLOC_BLOCK_SIZE=512 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:33.024 09:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:38.291 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:38.291 09:45:06 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:38.291 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.291 09:45:06 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:38.291 Found net devices under 0000:86:00.0: cvl_0_0 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.291 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:38.292 Found net devices under 0000:86:00.1: cvl_0_1 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:38.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:38.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:10:38.292 00:10:38.292 --- 10.0.0.2 ping statistics --- 00:10:38.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.292 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:38.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:38.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:10:38.292 00:10:38.292 --- 10.0.0.1 ping statistics --- 00:10:38.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.292 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=1127744 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 1127744 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
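The `nvmf_tcp_init` sequence above (netns creation through the two ping checks) can be condensed into one sketch. Interface names, the namespace name, and the 10.0.0.x addresses are taken from this run's trace; the function is only defined, not executed, since running it requires root and the physical `cvl_*` ports:

```shell
# Hedged sketch of the dual-namespace TCP topology the trace builds:
# move the target-side port into a private netns, address both ends of
# the /24, bring links up, and open TCP 4420 toward the initiator side.
setup_nvmf_tcp_ns() {
    local tgt_if=$1 ini_if=$2 ns=${3:-cvl_0_0_ns_spdk}
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"                      # target NIC into netns
    ip addr add 10.0.0.1/24 dev "$ini_if"                  # initiator side
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target side
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target reachability check
}
```

With this in place, `nvmf_tgt` is launched under `ip netns exec` (as the `nvmf/common.sh@504` step shows) so the target listens on 10.0.0.2:4420 while the initiator connects from the root namespace.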
common/autotest_common.sh@831 -- # '[' -z 1127744 ']' 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:38.292 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.292 [2024-12-07 09:45:06.938456] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:38.292 [2024-12-07 09:45:06.938502] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.292 [2024-12-07 09:45:06.998256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.550 [2024-12-07 09:45:07.039056] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.550 [2024-12-07 09:45:07.039099] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:38.550 [2024-12-07 09:45:07.039106] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.550 [2024-12-07 09:45:07.039112] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.550 [2024-12-07 09:45:07.039117] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:38.550 [2024-12-07 09:45:07.039246] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:38.550 [2024-12-07 09:45:07.039336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:38.550 [2024-12-07 09:45:07.039422] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.550 [2024-12-07 09:45:07.039423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.550 [2024-12-07 09:45:07.195138] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.550 Malloc0 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.550 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:38.551 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.551 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.551 [2024-12-07 
09:45:07.249533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:38.551 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.551 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:38.551 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:38.551 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:10:38.551 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:10:38.551 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:38.551 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:38.551 { 00:10:38.551 "params": { 00:10:38.551 "name": "Nvme$subsystem", 00:10:38.551 "trtype": "$TEST_TRANSPORT", 00:10:38.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:38.551 "adrfam": "ipv4", 00:10:38.551 "trsvcid": "$NVMF_PORT", 00:10:38.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:38.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:38.551 "hdgst": ${hdgst:-false}, 00:10:38.551 "ddgst": ${ddgst:-false} 00:10:38.551 }, 00:10:38.551 "method": "bdev_nvme_attach_controller" 00:10:38.551 } 00:10:38.551 EOF 00:10:38.551 )") 00:10:38.551 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:10:38.551 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
00:10:38.551 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:10:38.551 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:38.551 "params": { 00:10:38.551 "name": "Nvme1", 00:10:38.551 "trtype": "tcp", 00:10:38.551 "traddr": "10.0.0.2", 00:10:38.551 "adrfam": "ipv4", 00:10:38.551 "trsvcid": "4420", 00:10:38.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:38.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:38.551 "hdgst": false, 00:10:38.551 "ddgst": false 00:10:38.551 }, 00:10:38.551 "method": "bdev_nvme_attach_controller" 00:10:38.551 }' 00:10:38.809 [2024-12-07 09:45:07.297931] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:38.809 [2024-12-07 09:45:07.297983] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127864 ] 00:10:38.809 [2024-12-07 09:45:07.354069] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:38.809 [2024-12-07 09:45:07.395994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.809 [2024-12-07 09:45:07.396090] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.809 [2024-12-07 09:45:07.396093] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.066 I/O targets: 00:10:39.066 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:39.066 00:10:39.066 00:10:39.066 CUnit - A unit testing framework for C - Version 2.1-3 00:10:39.066 http://cunit.sourceforge.net/ 00:10:39.066 00:10:39.066 00:10:39.066 Suite: bdevio tests on: Nvme1n1 00:10:39.066 Test: blockdev write read block ...passed 00:10:39.066 Test: blockdev write zeroes read block ...passed 00:10:39.066 Test: blockdev write zeroes read no split ...passed 00:10:39.066 Test: blockdev write zeroes read split 
...passed 00:10:39.324 Test: blockdev write zeroes read split partial ...passed 00:10:39.324 Test: blockdev reset ...[2024-12-07 09:45:07.825691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:39.324 [2024-12-07 09:45:07.825758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21684a0 (9): Bad file descriptor 00:10:39.324 [2024-12-07 09:45:07.877543] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:39.324 passed 00:10:39.324 Test: blockdev write read 8 blocks ...passed 00:10:39.324 Test: blockdev write read size > 128k ...passed 00:10:39.324 Test: blockdev write read invalid size ...passed 00:10:39.324 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:39.324 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:39.324 Test: blockdev write read max offset ...passed 00:10:39.324 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:39.324 Test: blockdev writev readv 8 blocks ...passed 00:10:39.582 Test: blockdev writev readv 30 x 1block ...passed 00:10:39.582 Test: blockdev writev readv block ...passed 00:10:39.582 Test: blockdev writev readv size > 128k ...passed 00:10:39.582 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:39.582 Test: blockdev comparev and writev ...[2024-12-07 09:45:08.127861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.582 [2024-12-07 09:45:08.127896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:39.582 [2024-12-07 09:45:08.127911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.582 [2024-12-07 09:45:08.127919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:39.582 [2024-12-07 09:45:08.128193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.582 [2024-12-07 09:45:08.128204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:39.582 [2024-12-07 09:45:08.128216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.582 [2024-12-07 09:45:08.128223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:39.582 [2024-12-07 09:45:08.128483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.582 [2024-12-07 09:45:08.128494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:39.582 [2024-12-07 09:45:08.128505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.582 [2024-12-07 09:45:08.128513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:39.582 [2024-12-07 09:45:08.128762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.582 [2024-12-07 09:45:08.128773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:39.582 [2024-12-07 09:45:08.128786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:10:39.582 [2024-12-07 09:45:08.128794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:39.582 passed 00:10:39.582 Test: blockdev nvme passthru rw ...passed 00:10:39.582 Test: blockdev nvme passthru vendor specific ...[2024-12-07 09:45:08.210324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:39.582 [2024-12-07 09:45:08.210341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:39.582 [2024-12-07 09:45:08.210466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:39.582 [2024-12-07 09:45:08.210475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:39.582 [2024-12-07 09:45:08.210599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:39.582 [2024-12-07 09:45:08.210609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:39.582 [2024-12-07 09:45:08.210722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:39.582 [2024-12-07 09:45:08.210731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:39.582 passed 00:10:39.582 Test: blockdev nvme admin passthru ...passed 00:10:39.582 Test: blockdev copy ...passed 00:10:39.582 00:10:39.582 Run Summary: Type Total Ran Passed Failed Inactive 00:10:39.582 suites 1 1 n/a 0 0 00:10:39.582 tests 23 23 23 0 0 00:10:39.582 asserts 152 152 152 0 n/a 00:10:39.582 00:10:39.582 Elapsed time = 1.276 seconds 00:10:39.840 09:45:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:39.840 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.840 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.840 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.840 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:39.840 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:39.840 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:39.840 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:39.840 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:39.840 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:39.840 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:39.840 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:39.840 rmmod nvme_tcp 00:10:39.840 rmmod nvme_fabrics 00:10:39.840 rmmod nvme_keyring 00:10:39.840 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:39.840 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:39.841 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:39.841 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 1127744 ']' 00:10:39.841 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 1127744 00:10:39.841 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1127744 ']' 
00:10:39.841 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1127744 00:10:39.841 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:39.841 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:39.841 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1127744 00:10:39.841 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:39.841 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:39.841 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1127744' 00:10:39.841 killing process with pid 1127744 00:10:39.841 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1127744 00:10:39.841 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1127744 00:10:40.099 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:40.099 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:40.099 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:40.099 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:40.099 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:10:40.099 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:40.099 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:10:40.099 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:40.099 
09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:40.099 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.099 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.099 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.629 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:42.629 00:10:42.629 real 0m9.339s 00:10:42.629 user 0m10.448s 00:10:42.629 sys 0m4.375s 00:10:42.629 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.629 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.629 ************************************ 00:10:42.629 END TEST nvmf_bdevio 00:10:42.629 ************************************ 00:10:42.629 09:45:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:42.629 00:10:42.629 real 4m27.987s 00:10:42.629 user 10m17.407s 00:10:42.629 sys 1m33.457s 00:10:42.629 09:45:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.629 09:45:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:42.629 ************************************ 00:10:42.629 END TEST nvmf_target_core 00:10:42.629 ************************************ 00:10:42.629 09:45:10 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:42.629 09:45:10 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:42.629 09:45:10 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.629 09:45:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:42.629 
************************************ 00:10:42.629 START TEST nvmf_target_extra 00:10:42.629 ************************************ 00:10:42.629 09:45:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:42.629 * Looking for test storage... 00:10:42.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:42.629 09:45:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:42.629 09:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:10:42.629 09:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:42.629 09:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:42.629 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.629 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.629 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.629 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.629 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.629 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.629 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.629 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.629 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.629 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.629 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.629 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:42.629 
09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:42.629 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.629 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:42.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.630 --rc genhtml_branch_coverage=1 00:10:42.630 --rc genhtml_function_coverage=1 00:10:42.630 --rc genhtml_legend=1 00:10:42.630 --rc geninfo_all_blocks=1 00:10:42.630 
--rc geninfo_unexecuted_blocks=1 00:10:42.630 00:10:42.630 ' 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:42.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.630 --rc genhtml_branch_coverage=1 00:10:42.630 --rc genhtml_function_coverage=1 00:10:42.630 --rc genhtml_legend=1 00:10:42.630 --rc geninfo_all_blocks=1 00:10:42.630 --rc geninfo_unexecuted_blocks=1 00:10:42.630 00:10:42.630 ' 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:42.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.630 --rc genhtml_branch_coverage=1 00:10:42.630 --rc genhtml_function_coverage=1 00:10:42.630 --rc genhtml_legend=1 00:10:42.630 --rc geninfo_all_blocks=1 00:10:42.630 --rc geninfo_unexecuted_blocks=1 00:10:42.630 00:10:42.630 ' 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:42.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.630 --rc genhtml_branch_coverage=1 00:10:42.630 --rc genhtml_function_coverage=1 00:10:42.630 --rc genhtml_legend=1 00:10:42.630 --rc geninfo_all_blocks=1 00:10:42.630 --rc geninfo_unexecuted_blocks=1 00:10:42.630 00:10:42.630 ' 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:42.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:42.630 09:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:42.631 ************************************ 00:10:42.631 START TEST nvmf_example 00:10:42.631 ************************************ 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:42.631 * Looking for test storage... 00:10:42.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.631 
09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:42.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.631 --rc genhtml_branch_coverage=1 00:10:42.631 --rc genhtml_function_coverage=1 00:10:42.631 --rc genhtml_legend=1 00:10:42.631 --rc geninfo_all_blocks=1 00:10:42.631 --rc geninfo_unexecuted_blocks=1 00:10:42.631 00:10:42.631 ' 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:42.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.631 --rc genhtml_branch_coverage=1 00:10:42.631 --rc genhtml_function_coverage=1 00:10:42.631 --rc genhtml_legend=1 00:10:42.631 --rc geninfo_all_blocks=1 00:10:42.631 --rc geninfo_unexecuted_blocks=1 00:10:42.631 00:10:42.631 ' 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:42.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.631 --rc genhtml_branch_coverage=1 00:10:42.631 --rc genhtml_function_coverage=1 00:10:42.631 --rc genhtml_legend=1 00:10:42.631 --rc geninfo_all_blocks=1 00:10:42.631 --rc geninfo_unexecuted_blocks=1 00:10:42.631 00:10:42.631 ' 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:42.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.631 --rc 
genhtml_branch_coverage=1 00:10:42.631 --rc genhtml_function_coverage=1 00:10:42.631 --rc genhtml_legend=1 00:10:42.631 --rc geninfo_all_blocks=1 00:10:42.631 --rc geninfo_unexecuted_blocks=1 00:10:42.631 00:10:42.631 ' 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.631 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:42.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:42.632 09:45:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:42.632 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:42.889 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:42.889 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:42.889 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:42.889 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:42.889 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:42.889 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.889 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:42.889 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:42.889 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:42.889 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.889 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.889 
09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.889 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:42.889 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:42.889 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:42.889 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:49.446 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:49.446 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:49.446 
09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:49.446 Found net devices under 0000:86:00.0: cvl_0_0 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@414 -- # [[ up == up ]] 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.446 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:49.447 Found net devices under 0000:86:00.1: cvl_0_1 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # is_hw=yes 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.447 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:49.447 09:45:17 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:49.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:10:49.447 00:10:49.447 --- 10.0.0.2 ping statistics --- 00:10:49.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.447 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:49.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:10:49.447 00:10:49.447 --- 10.0.0.1 ping statistics --- 00:10:49.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.447 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # return 0 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # 
nvmfexamplestart '-m 0xF' 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1132197 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1132197 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1132197 ']' 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:49.447 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:59.402 Initializing NVMe Controllers 00:10:59.402 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:59.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:59.402 Initialization complete. Launching workers. 00:10:59.402 ======================================================== 00:10:59.402 Latency(us) 00:10:59.402 Device Information : IOPS MiB/s Average min max 00:10:59.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18373.49 71.77 3482.72 513.18 15391.76 00:10:59.402 ======================================================== 00:10:59.402 Total : 18373.49 71.77 3482.72 513.18 15391.76 00:10:59.402 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:59.402 rmmod nvme_tcp 00:10:59.402 rmmod nvme_fabrics 00:10:59.402 rmmod nvme_keyring 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- 
# return 0 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 1132197 ']' 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 1132197 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1132197 ']' 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1132197 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1132197 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1132197' 00:10:59.402 killing process with pid 1132197 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1132197 00:10:59.402 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1132197 00:10:59.402 nvmf threads initialize successfully 00:10:59.402 bdev subsystem init successfully 00:10:59.402 created a nvmf target service 00:10:59.402 create targets's poll groups done 00:10:59.402 all subsystems of target started 00:10:59.402 nvmf target is running 00:10:59.402 all subsystems of target stopped 00:10:59.402 destroy targets's poll groups done 00:10:59.402 destroyed the nvmf target service 00:10:59.402 bdev subsystem finish successfully 00:10:59.402 nvmf threads destroy successfully 00:10:59.402 09:45:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:59.402 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:59.402 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:59.403 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:59.403 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:59.403 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-save 00:10:59.403 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-restore 00:10:59.403 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:59.403 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:59.403 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.403 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.403 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.939 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:01.939 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:01.939 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:01.939 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.939 00:11:01.939 real 0m18.998s 00:11:01.939 user 0m43.109s 00:11:01.939 sys 0m6.000s 00:11:01.939 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:01.939 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.939 ************************************ 00:11:01.939 END TEST nvmf_example 00:11:01.939 ************************************ 00:11:01.939 09:45:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:01.939 09:45:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:01.939 09:45:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.939 09:45:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:01.939 ************************************ 00:11:01.939 START TEST nvmf_filesystem 00:11:01.939 ************************************ 00:11:01.939 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:01.939 * Looking for test storage... 
00:11:01.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:01.940 
09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:01.940 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:01.940 --rc genhtml_branch_coverage=1 00:11:01.940 --rc genhtml_function_coverage=1 00:11:01.940 --rc genhtml_legend=1 00:11:01.940 --rc geninfo_all_blocks=1 00:11:01.940 --rc geninfo_unexecuted_blocks=1 00:11:01.940 00:11:01.940 ' 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:01.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.940 --rc genhtml_branch_coverage=1 00:11:01.940 --rc genhtml_function_coverage=1 00:11:01.940 --rc genhtml_legend=1 00:11:01.940 --rc geninfo_all_blocks=1 00:11:01.940 --rc geninfo_unexecuted_blocks=1 00:11:01.940 00:11:01.940 ' 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:01.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.940 --rc genhtml_branch_coverage=1 00:11:01.940 --rc genhtml_function_coverage=1 00:11:01.940 --rc genhtml_legend=1 00:11:01.940 --rc geninfo_all_blocks=1 00:11:01.940 --rc geninfo_unexecuted_blocks=1 00:11:01.940 00:11:01.940 ' 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:01.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.940 --rc genhtml_branch_coverage=1 00:11:01.940 --rc genhtml_function_coverage=1 00:11:01.940 --rc genhtml_legend=1 00:11:01.940 --rc geninfo_all_blocks=1 00:11:01.940 --rc geninfo_unexecuted_blocks=1 00:11:01.940 00:11:01.940 ' 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:01.940 09:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:01.940 09:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:01.940 09:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- 
# CONFIG_DAOS=n 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:01.940 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 
00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:01.941 09:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:01.941 09:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:01.941 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:01.941 #define SPDK_CONFIG_H 00:11:01.941 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:01.941 #define SPDK_CONFIG_APPS 1 00:11:01.941 #define SPDK_CONFIG_ARCH native 00:11:01.941 #undef SPDK_CONFIG_ASAN 00:11:01.941 #undef SPDK_CONFIG_AVAHI 00:11:01.941 #undef SPDK_CONFIG_CET 00:11:01.941 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:01.941 #define SPDK_CONFIG_COVERAGE 1 00:11:01.941 #define SPDK_CONFIG_CROSS_PREFIX 00:11:01.941 #undef SPDK_CONFIG_CRYPTO 00:11:01.941 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:01.941 #undef SPDK_CONFIG_CUSTOMOCF 00:11:01.941 #undef SPDK_CONFIG_DAOS 00:11:01.941 #define SPDK_CONFIG_DAOS_DIR 00:11:01.941 #define SPDK_CONFIG_DEBUG 1 00:11:01.941 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:01.941 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:01.941 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:01.941 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:01.941 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:01.941 #undef SPDK_CONFIG_DPDK_UADK 00:11:01.941 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:01.941 #define SPDK_CONFIG_EXAMPLES 1 00:11:01.941 #undef SPDK_CONFIG_FC 00:11:01.941 #define SPDK_CONFIG_FC_PATH 00:11:01.941 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:01.941 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:01.941 #define SPDK_CONFIG_FSDEV 1 00:11:01.941 #undef SPDK_CONFIG_FUSE 00:11:01.941 #undef SPDK_CONFIG_FUZZER 00:11:01.941 #define SPDK_CONFIG_FUZZER_LIB 00:11:01.941 #undef SPDK_CONFIG_GOLANG 00:11:01.941 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:01.941 #define 
SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:01.941 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:01.941 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:01.941 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:01.941 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:01.941 #undef SPDK_CONFIG_HAVE_LZ4 00:11:01.941 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:01.941 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:01.941 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:01.941 #define SPDK_CONFIG_IDXD 1 00:11:01.941 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:01.941 #undef SPDK_CONFIG_IPSEC_MB 00:11:01.941 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:01.941 #define SPDK_CONFIG_ISAL 1 00:11:01.941 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:01.941 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:01.941 #define SPDK_CONFIG_LIBDIR 00:11:01.941 #undef SPDK_CONFIG_LTO 00:11:01.941 #define SPDK_CONFIG_MAX_LCORES 128 00:11:01.941 #define SPDK_CONFIG_NVME_CUSE 1 00:11:01.941 #undef SPDK_CONFIG_OCF 00:11:01.941 #define SPDK_CONFIG_OCF_PATH 00:11:01.941 #define SPDK_CONFIG_OPENSSL_PATH 00:11:01.941 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:01.941 #define SPDK_CONFIG_PGO_DIR 00:11:01.941 #undef SPDK_CONFIG_PGO_USE 00:11:01.941 #define SPDK_CONFIG_PREFIX /usr/local 00:11:01.941 #undef SPDK_CONFIG_RAID5F 00:11:01.941 #undef SPDK_CONFIG_RBD 00:11:01.941 #define SPDK_CONFIG_RDMA 1 00:11:01.941 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:01.941 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:01.941 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:01.941 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:01.941 #define SPDK_CONFIG_SHARED 1 00:11:01.941 #undef SPDK_CONFIG_SMA 00:11:01.941 #define SPDK_CONFIG_TESTS 1 00:11:01.941 #undef SPDK_CONFIG_TSAN 00:11:01.941 #define SPDK_CONFIG_UBLK 1 00:11:01.941 #define SPDK_CONFIG_UBSAN 1 00:11:01.941 #undef SPDK_CONFIG_UNIT_TESTS 00:11:01.941 #undef SPDK_CONFIG_URING 00:11:01.941 #define SPDK_CONFIG_URING_PATH 00:11:01.941 #undef SPDK_CONFIG_URING_ZNS 00:11:01.941 #undef SPDK_CONFIG_USDT 00:11:01.941 
#undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:01.941 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:01.941 #define SPDK_CONFIG_VFIO_USER 1 00:11:01.941 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:01.941 #define SPDK_CONFIG_VHOST 1 00:11:01.941 #define SPDK_CONFIG_VIRTIO 1 00:11:01.941 #undef SPDK_CONFIG_VTUNE 00:11:01.942 #define SPDK_CONFIG_VTUNE_DIR 00:11:01.942 #define SPDK_CONFIG_WERROR 1 00:11:01.942 #define SPDK_CONFIG_WPDK_DIR 00:11:01.942 #undef SPDK_CONFIG_XNVME 00:11:01.942 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.942 09:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:01.942 09:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:01.942 
09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:01.942 09:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:01.942 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:01.943 
09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : v22.11.4 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:01.943 09:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:01.943 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j96 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1134382 ]] 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1134382 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.sCVbid 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.sCVbid/tests/target /tmp/spdk.sCVbid 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:01.944 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=187788263424 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=195963961344 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=8175697920 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 
00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97970614272 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981980672 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=39169748992 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=39192793088 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23044096 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97980215296 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981980672 00:11:01.945 09:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=1765376 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=19596382208 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=19596394496 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:01.945 * Looking for test storage... 
00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=187788263424 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=10390290432 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.945 09:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:01.945 09:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.945 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:01.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.945 --rc genhtml_branch_coverage=1 00:11:01.945 --rc genhtml_function_coverage=1 00:11:01.945 --rc genhtml_legend=1 00:11:01.945 --rc geninfo_all_blocks=1 00:11:01.945 --rc geninfo_unexecuted_blocks=1 00:11:01.945 00:11:01.945 ' 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:01.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.946 --rc genhtml_branch_coverage=1 00:11:01.946 --rc genhtml_function_coverage=1 00:11:01.946 --rc genhtml_legend=1 00:11:01.946 --rc geninfo_all_blocks=1 00:11:01.946 --rc geninfo_unexecuted_blocks=1 00:11:01.946 00:11:01.946 ' 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:01.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.946 --rc genhtml_branch_coverage=1 00:11:01.946 --rc genhtml_function_coverage=1 00:11:01.946 --rc genhtml_legend=1 00:11:01.946 --rc geninfo_all_blocks=1 00:11:01.946 --rc geninfo_unexecuted_blocks=1 00:11:01.946 00:11:01.946 ' 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:01.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.946 --rc genhtml_branch_coverage=1 00:11:01.946 --rc genhtml_function_coverage=1 00:11:01.946 --rc genhtml_legend=1 00:11:01.946 --rc geninfo_all_blocks=1 00:11:01.946 --rc geninfo_unexecuted_blocks=1 00:11:01.946 00:11:01.946 ' 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.946 09:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:01.946 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.422 09:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:07.422 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 
00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:07.422 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:07.422 Found net devices under 0000:86:00.0: cvl_0_0 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:07.422 Found net devices under 0000:86:00.1: cvl_0_1 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.422 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # 
(( 2 == 0 )) 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # is_hw=yes 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:07.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:07.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:11:07.423 00:11:07.423 --- 10.0.0.2 ping statistics --- 00:11:07.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.423 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:07.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:07.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:11:07.423 00:11:07.423 --- 10.0.0.1 ping statistics --- 00:11:07.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.423 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # return 0 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:07.423 09:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:07.423 ************************************ 00:11:07.423 START TEST nvmf_filesystem_no_in_capsule 00:11:07.423 ************************************ 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=1137415 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 1137415 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 1137415 ']' 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:07.423 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.423 [2024-12-07 09:45:35.960083] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:11:07.423 [2024-12-07 09:45:35.960124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.423 [2024-12-07 09:45:36.022318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.423 [2024-12-07 09:45:36.065122] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.423 [2024-12-07 09:45:36.065162] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
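The `waitforlisten 1137415` call traced above blocks until the freshly started `nvmf_tgt` is up and answering on `/var/tmp/spdk.sock`. A simplified stand-in is sketched below; it is an assumption of this sketch that waiting for the socket path to appear is enough, whereas the real helper also checks the pid with `kill -0` and probes liveness over the RPC socket:

```shell
# Simplified sketch of waitforlisten: poll until the RPC socket path
# shows up. The real helper additionally verifies the pid is alive and
# that the socket actually answers an RPC.
wait_for_rpc_sock() {
  local sock=$1 retries=${2:-100}
  for ((i = 0; i < retries; i++)); do
    [ -e "$sock" ] && return 0   # real code tests for a unix socket and probes it
    sleep 0.1
  done
  echo "timed out waiting for $sock" >&2
  return 1
}
```

With the default 100 retries at 0.1 s each this gives the target roughly ten seconds to come up before the test fails.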
00:11:07.423 [2024-12-07 09:45:36.065169] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.423 [2024-12-07 09:45:36.065176] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.423 [2024-12-07 09:45:36.065182] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:07.423 [2024-12-07 09:45:36.065225] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.423 [2024-12-07 09:45:36.065246] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.423 [2024-12-07 09:45:36.065313] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.423 [2024-12-07 09:45:36.065314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.714 [2024-12-07 09:45:36.219184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.714 Malloc1 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.714 [2024-12-07 09:45:36.362500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:07.714 09:45:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:07.714 { 00:11:07.714 "name": "Malloc1", 00:11:07.714 "aliases": [ 00:11:07.714 "34a55bc3-0d2f-4e2c-a7c5-1948ed0b7a73" 00:11:07.714 ], 00:11:07.714 "product_name": "Malloc disk", 00:11:07.714 "block_size": 512, 00:11:07.714 "num_blocks": 1048576, 00:11:07.714 "uuid": "34a55bc3-0d2f-4e2c-a7c5-1948ed0b7a73", 00:11:07.714 "assigned_rate_limits": { 00:11:07.714 "rw_ios_per_sec": 0, 00:11:07.714 "rw_mbytes_per_sec": 0, 00:11:07.714 "r_mbytes_per_sec": 0, 00:11:07.714 "w_mbytes_per_sec": 0 00:11:07.714 }, 00:11:07.714 "claimed": true, 00:11:07.714 "claim_type": "exclusive_write", 00:11:07.714 "zoned": false, 00:11:07.714 "supported_io_types": { 00:11:07.714 "read": true, 00:11:07.714 "write": true, 00:11:07.714 "unmap": true, 00:11:07.714 "flush": true, 00:11:07.714 "reset": true, 00:11:07.714 "nvme_admin": false, 00:11:07.714 "nvme_io": false, 00:11:07.714 "nvme_io_md": false, 00:11:07.714 "write_zeroes": true, 00:11:07.714 "zcopy": true, 00:11:07.714 "get_zone_info": false, 00:11:07.714 "zone_management": false, 00:11:07.714 "zone_append": false, 00:11:07.714 "compare": false, 00:11:07.714 "compare_and_write": 
false, 00:11:07.714 "abort": true, 00:11:07.714 "seek_hole": false, 00:11:07.714 "seek_data": false, 00:11:07.714 "copy": true, 00:11:07.714 "nvme_iov_md": false 00:11:07.714 }, 00:11:07.714 "memory_domains": [ 00:11:07.714 { 00:11:07.714 "dma_device_id": "system", 00:11:07.714 "dma_device_type": 1 00:11:07.714 }, 00:11:07.714 { 00:11:07.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.714 "dma_device_type": 2 00:11:07.714 } 00:11:07.714 ], 00:11:07.714 "driver_specific": {} 00:11:07.714 } 00:11:07.714 ]' 00:11:07.714 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:08.008 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:08.008 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:08.008 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:08.008 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:08.008 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:08.008 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:08.008 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:08.954 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:08.954 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:08.954 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:08.954 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:08.954 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:10.865 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:10.865 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:11.123 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:11.123 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:11.123 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:11.123 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:11.123 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:11.123 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:11.123 09:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:11.123 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:11.123 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:11.123 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:11.123 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:11.123 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:11.123 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:11.123 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:11.123 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:11.381 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:11.639 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:12.573 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:12.573 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:12.573 09:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:12.573 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.573 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.573 ************************************ 00:11:12.573 START TEST filesystem_ext4 00:11:12.573 ************************************ 00:11:12.573 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:12.573 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:12.573 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:12.573 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:12.573 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:12.573 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:12.573 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:12.573 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:12.573 09:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:12.573 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:12.573 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:12.573 mke2fs 1.47.0 (5-Feb-2023) 00:11:12.573 Discarding device blocks: 0/522240 done 00:11:12.831 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:12.831 Filesystem UUID: d8a84ebd-94f9-4639-8596-f98ebecd869f 00:11:12.831 Superblock backups stored on blocks: 00:11:12.831 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:12.831 00:11:12.831 Allocating group tables: 0/64 done 00:11:12.831 Writing inode tables: 0/64 done 00:11:13.397 Creating journal (8192 blocks): done 00:11:15.595 Writing superblocks and filesystem accounting information: 0/64 done 00:11:15.595 00:11:15.595 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:15.595 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:22.152 09:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1137415 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:22.152 00:11:22.152 real 0m9.149s 00:11:22.152 user 0m0.029s 00:11:22.152 sys 0m0.074s 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:22.152 ************************************ 00:11:22.152 END TEST filesystem_ext4 00:11:22.152 ************************************ 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:22.152 
09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.152 ************************************ 00:11:22.152 START TEST filesystem_btrfs 00:11:22.152 ************************************ 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:22.152 09:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:22.152 btrfs-progs v6.8.1 00:11:22.152 See https://btrfs.readthedocs.io for more information. 00:11:22.152 00:11:22.152 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:22.152 NOTE: several default settings have changed in version 5.15, please make sure 00:11:22.152 this does not affect your deployments: 00:11:22.152 - DUP for metadata (-m dup) 00:11:22.152 - enabled no-holes (-O no-holes) 00:11:22.152 - enabled free-space-tree (-R free-space-tree) 00:11:22.152 00:11:22.152 Label: (null) 00:11:22.152 UUID: c223e02a-c5fd-4998-9f18-4cc1ba1e65c8 00:11:22.152 Node size: 16384 00:11:22.152 Sector size: 4096 (CPU page size: 4096) 00:11:22.152 Filesystem size: 510.00MiB 00:11:22.152 Block group profiles: 00:11:22.152 Data: single 8.00MiB 00:11:22.152 Metadata: DUP 32.00MiB 00:11:22.152 System: DUP 8.00MiB 00:11:22.152 SSD detected: yes 00:11:22.152 Zoned device: no 00:11:22.152 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:22.152 Checksum: crc32c 00:11:22.152 Number of devices: 1 00:11:22.152 Devices: 00:11:22.152 ID SIZE PATH 00:11:22.152 1 510.00MiB /dev/nvme0n1p1 00:11:22.152 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:22.152 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:23.083 09:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1137415 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:23.083 00:11:23.083 real 0m1.202s 00:11:23.083 user 0m0.025s 00:11:23.083 sys 0m0.118s 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.083 
09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:23.083 ************************************ 00:11:23.083 END TEST filesystem_btrfs 00:11:23.083 ************************************ 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.083 ************************************ 00:11:23.083 START TEST filesystem_xfs 00:11:23.083 ************************************ 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:23.083 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:23.083 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:23.083 = sectsz=512 attr=2, projid32bit=1 00:11:23.083 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:23.083 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:23.083 data = bsize=4096 blocks=130560, imaxpct=25 00:11:23.083 = sunit=0 swidth=0 blks 00:11:23.083 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:23.083 log =internal log bsize=4096 blocks=16384, version=2 00:11:23.083 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:23.083 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:24.011 Discarding blocks...Done. 
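All three filesystem tests (ext4, btrfs, xfs) funnel through the same `make_filesystem` helper, and the traces above show its one filesystem-specific branch: ext4's mkfs takes `-F` to force, while btrfs and xfs take `-f`. A dry-run sketch of that flag selection follows; printing the command line instead of formatting the device is an assumption of this sketch (the real helper invokes `mkfs.$fstype` against the device and retries on failure):

```shell
# Dry-run sketch of the force-flag selection make_filesystem performs
# in the traces above: ext4's mkfs wants -F, btrfs and xfs want -f.
# Echoing the command instead of running it lets this sketch execute
# without a block device.
make_filesystem_cmd() {
  local fstype=$1 dev_name=$2 force
  if [ "$fstype" = ext4 ]; then
    force=-F
  else
    force=-f
  fi
  echo "mkfs.$fstype $force $dev_name"
}
make_filesystem_cmd ext4  /dev/nvme0n1p1   # -> mkfs.ext4 -F /dev/nvme0n1p1
make_filesystem_cmd btrfs /dev/nvme0n1p1   # -> mkfs.btrfs -f /dev/nvme0n1p1
make_filesystem_cmd xfs   /dev/nvme0n1p1   # -> mkfs.xfs -f /dev/nvme0n1p1
```

Forcing matters here because each test reuses the same `/dev/nvme0n1p1` partition, which still carries the previous test's filesystem signature.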
00:11:24.011 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:24.011 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:25.906 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:25.906 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:25.906 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:25.906 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:25.906 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:25.906 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:25.906 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1137415 00:11:25.906 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:25.906 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:25.906 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:25.906 09:45:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:25.906 00:11:25.906 real 0m2.860s 00:11:25.906 user 0m0.024s 00:11:25.906 sys 0m0.072s 00:11:25.906 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.906 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:25.906 ************************************ 00:11:25.906 END TEST filesystem_xfs 00:11:25.906 ************************************ 00:11:25.906 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:26.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1137415 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1137415 ']' 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1137415 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1137415 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1137415' 00:11:26.164 killing process with pid 1137415 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1137415 00:11:26.164 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 1137415 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:26.729 00:11:26.729 real 0m19.265s 00:11:26.729 user 1m15.891s 00:11:26.729 sys 0m1.424s 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.729 ************************************ 00:11:26.729 END TEST nvmf_filesystem_no_in_capsule 00:11:26.729 ************************************ 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:26.729 09:45:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:26.729 ************************************ 00:11:26.729 START TEST nvmf_filesystem_in_capsule 00:11:26.729 ************************************ 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=1140867 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 1140867 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1140867 ']' 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.729 09:45:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:26.729 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.729 [2024-12-07 09:45:55.305117] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:11:26.729 [2024-12-07 09:45:55.305165] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.729 [2024-12-07 09:45:55.368000] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.729 [2024-12-07 09:45:55.407964] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.729 [2024-12-07 09:45:55.408006] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.729 [2024-12-07 09:45:55.408017] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.729 [2024-12-07 09:45:55.408023] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.729 [2024-12-07 09:45:55.408028] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
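The waitforlisten trace above (local max_retries=100, then "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") follows a bounded-retry polling pattern. A minimal sketch of that pattern under stated assumptions; retry_until is a hypothetical name for illustration, and the real helper specifically probes the SPDK RPC socket rather than taking an arbitrary command.

```shell
# Hedged sketch of the bounded polling behind waitforlisten.
# retry_until is a hypothetical name; the suite's helper checks the
# SPDK RPC socket (e.g. /var/tmp/spdk.sock) and a live target PID.
retry_until() {
    local max_retries=$1; shift
    local i=0
    while (( i++ < max_retries )); do
        if "$@"; then
            return 0        # condition satisfied before the retry budget ran out
        fi
        sleep 0.1           # the real helper also sleeps between probes
    done
    return 1                # gave up after max_retries attempts
}
```

A caller would use it along the lines of `retry_until 100 test -S /var/tmp/spdk.sock` (an assumed invocation, not a line from the suite).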
00:11:26.729 [2024-12-07 09:45:55.408096] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.729 [2024-12-07 09:45:55.408192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.729 [2024-12-07 09:45:55.408290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.729 [2024-12-07 09:45:55.408291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.986 [2024-12-07 09:45:55.557254] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.986 Malloc1 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.986 09:45:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.986 [2024-12-07 09:45:55.704956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.986 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.243 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:27.243 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:27.243 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:27.243 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:27.243 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:27.243 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:27.243 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.243 09:45:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:27.243 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:27.243 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[
00:11:27.243   {
00:11:27.243     "name": "Malloc1",
00:11:27.243     "aliases": [
00:11:27.243       "49bcb822-069e-44fc-add1-8fda669ce866"
00:11:27.243     ],
00:11:27.243     "product_name": "Malloc disk",
00:11:27.243     "block_size": 512,
00:11:27.243     "num_blocks": 1048576,
00:11:27.243     "uuid": "49bcb822-069e-44fc-add1-8fda669ce866",
00:11:27.243     "assigned_rate_limits": {
00:11:27.243       "rw_ios_per_sec": 0,
00:11:27.243       "rw_mbytes_per_sec": 0,
00:11:27.243       "r_mbytes_per_sec": 0,
00:11:27.243       "w_mbytes_per_sec": 0
00:11:27.243     },
00:11:27.243     "claimed": true,
00:11:27.243     "claim_type": "exclusive_write",
00:11:27.243     "zoned": false,
00:11:27.243     "supported_io_types": {
00:11:27.243       "read": true,
00:11:27.243       "write": true,
00:11:27.243       "unmap": true,
00:11:27.243       "flush": true,
00:11:27.243       "reset": true,
00:11:27.243       "nvme_admin": false,
00:11:27.243       "nvme_io": false,
00:11:27.243       "nvme_io_md": false,
00:11:27.243       "write_zeroes": true,
00:11:27.243       "zcopy": true,
00:11:27.243       "get_zone_info": false,
00:11:27.243       "zone_management": false,
00:11:27.243       "zone_append": false,
00:11:27.243       "compare": false,
00:11:27.243       "compare_and_write": false,
00:11:27.243       "abort": true,
00:11:27.243       "seek_hole": false,
00:11:27.243       "seek_data": false,
00:11:27.243       "copy": true,
00:11:27.243       "nvme_iov_md": false
00:11:27.243     },
00:11:27.243     "memory_domains": [
00:11:27.243       {
00:11:27.243         "dma_device_id": "system",
00:11:27.243         "dma_device_type": 1
00:11:27.243       },
00:11:27.243       {
00:11:27.243         "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:27.243         "dma_device_type": 2
00:11:27.243       }
00:11:27.243     ],
00:11:27.243
"driver_specific": {} 00:11:27.243 } 00:11:27.243 ]' 00:11:27.243 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:27.243 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:27.243 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:27.243 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:27.243 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:27.243 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:27.243 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:27.243 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:28.610 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:28.610 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:28.610 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:28.610 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:11:28.610 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:30.499 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:30.499 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:30.499 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:30.499 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:30.499 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:30.499 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:30.499 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:30.499 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:30.499 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:30.499 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:30.499 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:30.499 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:30.499 09:45:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:30.499 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:30.499 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:30.499 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:30.499 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:31.062 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:31.318 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:32.249 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:32.249 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:32.249 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:32.249 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.249 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.506 ************************************ 00:11:32.506 START TEST filesystem_in_capsule_ext4 00:11:32.506 ************************************ 00:11:32.506 09:46:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:32.506 09:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:32.506 09:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:32.506 09:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:32.506 09:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:32.506 09:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:32.506 09:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:32.506 09:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:32.506 09:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:32.506 09:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:32.506 09:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:32.506 mke2fs 1.47.0 (5-Feb-2023) 00:11:32.506 Discarding device blocks: 
0/522240 done
00:11:32.506 Creating filesystem with 522240 1k blocks and 130560 inodes
00:11:32.506 Filesystem UUID: 53b29e84-bd61-4776-b1ed-fa6e8c53c6ec
00:11:32.506 Superblock backups stored on blocks:
00:11:32.506 	8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:11:32.506
00:11:32.506 Allocating group tables: 0/64 done
00:11:32.506 Writing inode tables: 0/64 done
00:11:33.437 Creating journal (8192 blocks): done
00:11:33.437 Writing superblocks and filesystem accounting information: 0/64 done
00:11:33.437
00:11:33.437 09:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0
00:11:33.437 09:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 --
target/filesystem.sh@37 -- # kill -0 1140867 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:40.007 00:11:40.007 real 0m7.160s 00:11:40.007 user 0m0.029s 00:11:40.007 sys 0m0.071s 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:40.007 ************************************ 00:11:40.007 END TEST filesystem_in_capsule_ext4 00:11:40.007 ************************************ 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.007 ************************************ 00:11:40.007 START 
TEST filesystem_in_capsule_btrfs 00:11:40.007 ************************************ 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:40.007 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:40.007 btrfs-progs v6.8.1 00:11:40.007 See https://btrfs.readthedocs.io for more information. 00:11:40.007 00:11:40.007 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:40.007 NOTE: several default settings have changed in version 5.15, please make sure 00:11:40.007 this does not affect your deployments: 00:11:40.007 - DUP for metadata (-m dup) 00:11:40.007 - enabled no-holes (-O no-holes) 00:11:40.007 - enabled free-space-tree (-R free-space-tree) 00:11:40.007 00:11:40.007 Label: (null) 00:11:40.007 UUID: d8e2d8b8-b3d5-4f9d-a9eb-cc693f09d1f6 00:11:40.007 Node size: 16384 00:11:40.007 Sector size: 4096 (CPU page size: 4096) 00:11:40.007 Filesystem size: 510.00MiB 00:11:40.007 Block group profiles: 00:11:40.007 Data: single 8.00MiB 00:11:40.007 Metadata: DUP 32.00MiB 00:11:40.007 System: DUP 8.00MiB 00:11:40.007 SSD detected: yes 00:11:40.007 Zoned device: no 00:11:40.008 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:40.008 Checksum: crc32c 00:11:40.008 Number of devices: 1 00:11:40.008 Devices: 00:11:40.008 ID SIZE PATH 00:11:40.008 1 510.00MiB /dev/nvme0n1p1 00:11:40.008 00:11:40.008 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:40.008 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:40.265 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:40.265 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:40.265 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:40.265 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:40.265 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:40.265 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:40.265 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1140867 00:11:40.265 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:40.265 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:40.265 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:40.265 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:40.266 00:11:40.266 real 0m0.630s 00:11:40.266 user 0m0.033s 00:11:40.266 sys 0m0.110s 00:11:40.266 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.266 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:40.266 ************************************ 00:11:40.266 END TEST filesystem_in_capsule_btrfs 00:11:40.266 ************************************ 00:11:40.266 09:46:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:40.266 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:40.266 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.266 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.266 ************************************ 00:11:40.266 START TEST filesystem_in_capsule_xfs 00:11:40.266 ************************************ 00:11:40.266 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:40.266 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:40.266 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:40.266 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:40.266 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:40.266 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:40.266 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:40.266 
09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:40.266 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:40.266 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:40.266 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:40.523 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:40.523 = sectsz=512 attr=2, projid32bit=1 00:11:40.523 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:40.523 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:40.523 data = bsize=4096 blocks=130560, imaxpct=25 00:11:40.523 = sunit=0 swidth=0 blks 00:11:40.523 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:40.523 log =internal log bsize=4096 blocks=16384, version=2 00:11:40.523 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:40.523 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:41.456 Discarding blocks...Done. 
00:11:41.456 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:41.456 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1140867 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:43.353 00:11:43.353 real 0m2.711s 00:11:43.353 user 0m0.026s 00:11:43.353 sys 0m0.072s 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:43.353 ************************************ 00:11:43.353 END TEST filesystem_in_capsule_xfs 00:11:43.353 ************************************ 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.353 09:46:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1140867 00:11:43.353 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1140867 ']' 00:11:43.354 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1140867 00:11:43.354 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:43.354 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:43.354 09:46:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1140867 00:11:43.354 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:43.354 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:43.354 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1140867' 00:11:43.354 killing process with pid 1140867 00:11:43.354 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1140867 00:11:43.354 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1140867 00:11:43.612 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:43.613 00:11:43.613 real 0m17.080s 00:11:43.613 user 1m7.195s 00:11:43.613 sys 0m1.420s 00:11:43.613 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:43.613 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.613 ************************************ 00:11:43.613 END TEST nvmf_filesystem_in_capsule 00:11:43.613 ************************************ 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:43.871 rmmod nvme_tcp 00:11:43.871 rmmod nvme_fabrics 00:11:43.871 rmmod nvme_keyring 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-save 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-restore 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.871 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:46.405 00:11:46.405 real 0m44.293s 00:11:46.405 user 2m24.782s 00:11:46.405 sys 0m7.085s 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:46.405 ************************************ 00:11:46.405 END TEST nvmf_filesystem 00:11:46.405 ************************************ 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:46.405 ************************************ 00:11:46.405 START TEST nvmf_target_discovery 00:11:46.405 ************************************ 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:46.405 * Looking for test storage... 
00:11:46.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.405 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:46.406 
09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:46.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.406 --rc genhtml_branch_coverage=1 00:11:46.406 --rc genhtml_function_coverage=1 00:11:46.406 --rc genhtml_legend=1 00:11:46.406 --rc geninfo_all_blocks=1 00:11:46.406 --rc geninfo_unexecuted_blocks=1 00:11:46.406 00:11:46.406 ' 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:46.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.406 --rc genhtml_branch_coverage=1 00:11:46.406 --rc genhtml_function_coverage=1 00:11:46.406 --rc genhtml_legend=1 00:11:46.406 --rc geninfo_all_blocks=1 00:11:46.406 --rc geninfo_unexecuted_blocks=1 00:11:46.406 00:11:46.406 ' 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:46.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.406 --rc genhtml_branch_coverage=1 00:11:46.406 --rc genhtml_function_coverage=1 00:11:46.406 --rc genhtml_legend=1 00:11:46.406 --rc geninfo_all_blocks=1 00:11:46.406 --rc geninfo_unexecuted_blocks=1 00:11:46.406 00:11:46.406 ' 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:46.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.406 --rc genhtml_branch_coverage=1 00:11:46.406 --rc genhtml_function_coverage=1 00:11:46.406 --rc genhtml_legend=1 00:11:46.406 --rc geninfo_all_blocks=1 00:11:46.406 --rc geninfo_unexecuted_blocks=1 00:11:46.406 00:11:46.406 ' 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.406 09:46:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.406 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:46.407 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.661 09:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.661 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:51.661 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:51.661 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:51.661 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:51.661 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:51.661 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.662 09:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:51.662 09:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:51.662 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:51.662 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:51.662 09:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:51.662 Found net devices under 0000:86:00.0: cvl_0_0 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.662 09:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:51.662 Found net devices under 0000:86:00.1: cvl_0_1 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:51.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:11:51.662 00:11:51.662 --- 10.0.0.2 ping statistics --- 00:11:51.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.662 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:51.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:11:51.662 00:11:51.662 --- 10.0.0.1 ping statistics --- 00:11:51.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.662 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # return 0 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:51.662 09:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:51.662 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:51.663 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:51.663 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:51.663 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:51.663 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.663 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=1147371 00:11:51.663 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 1147371 00:11:51.663 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:51.663 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1147371 ']' 00:11:51.663 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.663 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:51.663 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:51.663 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:51.663 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.663 [2024-12-07 09:46:20.383427] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:11:51.663 [2024-12-07 09:46:20.383472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.920 [2024-12-07 09:46:20.443144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.920 [2024-12-07 09:46:20.484425] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.920 [2024-12-07 09:46:20.484468] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.920 [2024-12-07 09:46:20.484476] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.920 [2024-12-07 09:46:20.484482] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.920 [2024-12-07 09:46:20.484486] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:51.920 [2024-12-07 09:46:20.484582] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.920 [2024-12-07 09:46:20.484682] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.920 [2024-12-07 09:46:20.484750] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.920 [2024-12-07 09:46:20.484751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.920 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:51.920 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:51.920 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:51.920 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:51.920 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.920 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.920 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:51.920 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.920 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.920 [2024-12-07 09:46:20.642349] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:52.177 09:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.177 Null1 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.177 [2024-12-07 09:46:20.690665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.177 Null2 00:11:52.177 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.178 
09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.178 Null3 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.178 Null4 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:52.178 09:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.178 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:52.436 00:11:52.436 Discovery Log Number of Records 6, Generation counter 6 00:11:52.436 =====Discovery Log Entry 0====== 00:11:52.436 trtype: tcp 00:11:52.436 adrfam: ipv4 00:11:52.436 subtype: current discovery subsystem 00:11:52.436 treq: not required 00:11:52.436 portid: 0 00:11:52.436 trsvcid: 4420 00:11:52.436 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:52.436 traddr: 10.0.0.2 00:11:52.436 eflags: explicit discovery connections, duplicate discovery information 00:11:52.436 sectype: none 00:11:52.436 =====Discovery Log Entry 1====== 00:11:52.436 trtype: tcp 00:11:52.436 adrfam: ipv4 00:11:52.436 subtype: nvme subsystem 00:11:52.436 treq: not required 00:11:52.436 portid: 0 00:11:52.436 trsvcid: 4420 00:11:52.436 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:52.436 traddr: 10.0.0.2 00:11:52.436 eflags: none 00:11:52.436 sectype: none 00:11:52.436 =====Discovery Log Entry 2====== 00:11:52.436 trtype: tcp 00:11:52.436 adrfam: ipv4 00:11:52.436 subtype: nvme subsystem 00:11:52.436 treq: not required 00:11:52.436 portid: 0 00:11:52.436 trsvcid: 4420 00:11:52.436 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:52.436 traddr: 10.0.0.2 00:11:52.436 eflags: none 00:11:52.436 sectype: none 00:11:52.436 =====Discovery Log Entry 3====== 00:11:52.436 trtype: tcp 00:11:52.436 adrfam: ipv4 00:11:52.436 subtype: nvme subsystem 00:11:52.436 treq: not required 00:11:52.436 portid: 
0 00:11:52.436 trsvcid: 4420 00:11:52.436 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:52.436 traddr: 10.0.0.2 00:11:52.436 eflags: none 00:11:52.436 sectype: none 00:11:52.436 =====Discovery Log Entry 4====== 00:11:52.436 trtype: tcp 00:11:52.436 adrfam: ipv4 00:11:52.436 subtype: nvme subsystem 00:11:52.436 treq: not required 00:11:52.436 portid: 0 00:11:52.436 trsvcid: 4420 00:11:52.436 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:52.436 traddr: 10.0.0.2 00:11:52.436 eflags: none 00:11:52.436 sectype: none 00:11:52.436 =====Discovery Log Entry 5====== 00:11:52.436 trtype: tcp 00:11:52.436 adrfam: ipv4 00:11:52.436 subtype: discovery subsystem referral 00:11:52.436 treq: not required 00:11:52.436 portid: 0 00:11:52.436 trsvcid: 4430 00:11:52.436 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:52.436 traddr: 10.0.0.2 00:11:52.436 eflags: none 00:11:52.436 sectype: none 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:52.436 Perform nvmf subsystem discovery via RPC 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.436 [ 00:11:52.436 { 00:11:52.436 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:52.436 "subtype": "Discovery", 00:11:52.436 "listen_addresses": [ 00:11:52.436 { 00:11:52.436 "trtype": "TCP", 00:11:52.436 "adrfam": "IPv4", 00:11:52.436 "traddr": "10.0.0.2", 00:11:52.436 "trsvcid": "4420" 00:11:52.436 } 00:11:52.436 ], 00:11:52.436 "allow_any_host": true, 00:11:52.436 "hosts": [] 00:11:52.436 }, 00:11:52.436 { 00:11:52.436 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.436 "subtype": "NVMe", 00:11:52.436 "listen_addresses": [ 
00:11:52.436 { 00:11:52.436 "trtype": "TCP", 00:11:52.436 "adrfam": "IPv4", 00:11:52.436 "traddr": "10.0.0.2", 00:11:52.436 "trsvcid": "4420" 00:11:52.436 } 00:11:52.436 ], 00:11:52.436 "allow_any_host": true, 00:11:52.436 "hosts": [], 00:11:52.436 "serial_number": "SPDK00000000000001", 00:11:52.436 "model_number": "SPDK bdev Controller", 00:11:52.436 "max_namespaces": 32, 00:11:52.436 "min_cntlid": 1, 00:11:52.436 "max_cntlid": 65519, 00:11:52.436 "namespaces": [ 00:11:52.436 { 00:11:52.436 "nsid": 1, 00:11:52.436 "bdev_name": "Null1", 00:11:52.436 "name": "Null1", 00:11:52.436 "nguid": "1F928597F93545E2AB369772CB7F7722", 00:11:52.436 "uuid": "1f928597-f935-45e2-ab36-9772cb7f7722" 00:11:52.436 } 00:11:52.436 ] 00:11:52.436 }, 00:11:52.436 { 00:11:52.436 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:52.436 "subtype": "NVMe", 00:11:52.436 "listen_addresses": [ 00:11:52.436 { 00:11:52.436 "trtype": "TCP", 00:11:52.436 "adrfam": "IPv4", 00:11:52.436 "traddr": "10.0.0.2", 00:11:52.436 "trsvcid": "4420" 00:11:52.436 } 00:11:52.436 ], 00:11:52.436 "allow_any_host": true, 00:11:52.436 "hosts": [], 00:11:52.436 "serial_number": "SPDK00000000000002", 00:11:52.436 "model_number": "SPDK bdev Controller", 00:11:52.436 "max_namespaces": 32, 00:11:52.436 "min_cntlid": 1, 00:11:52.436 "max_cntlid": 65519, 00:11:52.436 "namespaces": [ 00:11:52.436 { 00:11:52.436 "nsid": 1, 00:11:52.436 "bdev_name": "Null2", 00:11:52.436 "name": "Null2", 00:11:52.436 "nguid": "CFD80C431C4045F28AF79FCD1D9C703F", 00:11:52.436 "uuid": "cfd80c43-1c40-45f2-8af7-9fcd1d9c703f" 00:11:52.436 } 00:11:52.436 ] 00:11:52.436 }, 00:11:52.436 { 00:11:52.436 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:52.436 "subtype": "NVMe", 00:11:52.436 "listen_addresses": [ 00:11:52.436 { 00:11:52.436 "trtype": "TCP", 00:11:52.436 "adrfam": "IPv4", 00:11:52.436 "traddr": "10.0.0.2", 00:11:52.436 "trsvcid": "4420" 00:11:52.436 } 00:11:52.436 ], 00:11:52.436 "allow_any_host": true, 00:11:52.436 "hosts": [], 00:11:52.436 
"serial_number": "SPDK00000000000003", 00:11:52.436 "model_number": "SPDK bdev Controller", 00:11:52.436 "max_namespaces": 32, 00:11:52.436 "min_cntlid": 1, 00:11:52.436 "max_cntlid": 65519, 00:11:52.436 "namespaces": [ 00:11:52.436 { 00:11:52.436 "nsid": 1, 00:11:52.436 "bdev_name": "Null3", 00:11:52.436 "name": "Null3", 00:11:52.436 "nguid": "307A4833B6D1486C93D85EA6048349A4", 00:11:52.436 "uuid": "307a4833-b6d1-486c-93d8-5ea6048349a4" 00:11:52.436 } 00:11:52.436 ] 00:11:52.436 }, 00:11:52.436 { 00:11:52.436 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:52.436 "subtype": "NVMe", 00:11:52.436 "listen_addresses": [ 00:11:52.436 { 00:11:52.436 "trtype": "TCP", 00:11:52.436 "adrfam": "IPv4", 00:11:52.436 "traddr": "10.0.0.2", 00:11:52.436 "trsvcid": "4420" 00:11:52.436 } 00:11:52.436 ], 00:11:52.436 "allow_any_host": true, 00:11:52.436 "hosts": [], 00:11:52.436 "serial_number": "SPDK00000000000004", 00:11:52.436 "model_number": "SPDK bdev Controller", 00:11:52.436 "max_namespaces": 32, 00:11:52.436 "min_cntlid": 1, 00:11:52.436 "max_cntlid": 65519, 00:11:52.436 "namespaces": [ 00:11:52.436 { 00:11:52.436 "nsid": 1, 00:11:52.436 "bdev_name": "Null4", 00:11:52.436 "name": "Null4", 00:11:52.436 "nguid": "E7336A7C23824945BD1CF002FD30CA9D", 00:11:52.436 "uuid": "e7336a7c-2382-4945-bd1c-f002fd30ca9d" 00:11:52.436 } 00:11:52.436 ] 00:11:52.436 } 00:11:52.436 ] 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:52.436 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.437 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:52.694 
09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:52.694 rmmod nvme_tcp 00:11:52.694 rmmod nvme_fabrics 00:11:52.694 rmmod nvme_keyring 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 1147371 ']' 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 1147371 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1147371 ']' 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1147371 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1147371 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1147371' 00:11:52.694 killing process with pid 1147371 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1147371 00:11:52.694 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1147371 00:11:52.952 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:52.952 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:52.952 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:52.952 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:52.952 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-save 00:11:52.952 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:52.952 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:11:52.952 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:52.952 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:52.952 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.952 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.952 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.852 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:54.852 00:11:54.853 real 0m8.976s 00:11:54.853 user 0m5.689s 00:11:54.853 sys 0m4.509s 00:11:54.853 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.853 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.853 ************************************ 00:11:54.853 END TEST nvmf_target_discovery 00:11:54.853 ************************************ 00:11:55.111 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:55.111 09:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:55.111 09:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.111 09:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:55.111 ************************************ 00:11:55.111 START TEST nvmf_referrals 00:11:55.111 ************************************ 00:11:55.111 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:55.111 * Looking for test storage... 
00:11:55.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.111 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:55.111 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:11:55.111 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:55.111 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:55.111 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.111 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.111 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.111 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:55.112 09:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:55.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.112 
--rc genhtml_branch_coverage=1 00:11:55.112 --rc genhtml_function_coverage=1 00:11:55.112 --rc genhtml_legend=1 00:11:55.112 --rc geninfo_all_blocks=1 00:11:55.112 --rc geninfo_unexecuted_blocks=1 00:11:55.112 00:11:55.112 ' 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:55.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.112 --rc genhtml_branch_coverage=1 00:11:55.112 --rc genhtml_function_coverage=1 00:11:55.112 --rc genhtml_legend=1 00:11:55.112 --rc geninfo_all_blocks=1 00:11:55.112 --rc geninfo_unexecuted_blocks=1 00:11:55.112 00:11:55.112 ' 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:55.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.112 --rc genhtml_branch_coverage=1 00:11:55.112 --rc genhtml_function_coverage=1 00:11:55.112 --rc genhtml_legend=1 00:11:55.112 --rc geninfo_all_blocks=1 00:11:55.112 --rc geninfo_unexecuted_blocks=1 00:11:55.112 00:11:55.112 ' 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:55.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.112 --rc genhtml_branch_coverage=1 00:11:55.112 --rc genhtml_function_coverage=1 00:11:55.112 --rc genhtml_legend=1 00:11:55.112 --rc geninfo_all_blocks=1 00:11:55.112 --rc geninfo_unexecuted_blocks=1 00:11:55.112 00:11:55.112 ' 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.112 
09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.112 09:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.112 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.370 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:55.370 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:55.370 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:55.370 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:55.370 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:55.370 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:55.371 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:55.371 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:55.371 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.371 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:55.371 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:55.371 09:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:55.371 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.371 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.371 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.371 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:55.371 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:55.371 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.371 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:00.634 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:00.635 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:00.635 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown 
]] 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:00.635 Found net devices under 0000:86:00.0: cvl_0_0 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.635 09:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:00.635 Found net devices under 0000:86:00.1: cvl_0_1 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # is_hw=yes 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 
-- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:00.635 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip 
netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:00.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:00.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:12:00.635 00:12:00.635 --- 10.0.0.2 ping statistics --- 00:12:00.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.635 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:00.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:12:00.635 00:12:00.635 --- 10.0.0.1 ping statistics --- 00:12:00.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.635 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # return 0 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=1150938 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 1150938 00:12:00.635 
09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1150938 ']' 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.635 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.635 [2024-12-07 09:46:29.296769] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:00.635 [2024-12-07 09:46:29.296815] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.635 [2024-12-07 09:46:29.353870] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.894 [2024-12-07 09:46:29.396449] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.894 [2024-12-07 09:46:29.396489] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:00.894 [2024-12-07 09:46:29.396496] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.894 [2024-12-07 09:46:29.396502] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.894 [2024-12-07 09:46:29.396508] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.894 [2024-12-07 09:46:29.396550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.894 [2024-12-07 09:46:29.396647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.894 [2024-12-07 09:46:29.396736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.894 [2024-12-07 09:46:29.396738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.894 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:00.894 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:00.894 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:00.894 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:00.894 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.894 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.894 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:00.894 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.894 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.894 [2024-12-07 09:46:29.532786] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.894 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.894 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:00.894 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.894 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.894 [2024-12-07 09:46:29.545004] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:00.894 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.894 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:00.894 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.894 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.895 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.895 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:00.895 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.895 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.895 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.895 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:00.895 09:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.895 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.895 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.895 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:00.895 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:00.895 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.895 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.895 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.895 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:01.152 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:01.152 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:01.152 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.152 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:01.152 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.152 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:01.152 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.152 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.152 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:01.152 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.153 09:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:01.153 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:01.411 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:01.411 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.411 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.411 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.411 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:01.669 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:01.669 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:01.669 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:01.669 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:01.669 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:01.669 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.669 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:01.926 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:01.926 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:01.926 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:01.926 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:01.926 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.926 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:02.183 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:02.438 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:02.438 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:02.438 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:02.438 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:02.438 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:02.438 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:02.438 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:02.438 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:02.438 09:46:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:02.438 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:02.438 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:02.438 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:02.438 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:02.694 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:02.694 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:02.694 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.694 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.694 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.694 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:02.694 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:02.694 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.694 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:02.694 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.694 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:02.694 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:02.694 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:02.694 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:02.694 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:02.694 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:02.694 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:02.951 rmmod nvme_tcp 00:12:02.951 rmmod nvme_fabrics 00:12:02.951 rmmod nvme_keyring 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 1150938 ']' 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 1150938 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1150938 ']' 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1150938 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1150938 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1150938' 00:12:02.951 killing process with pid 1150938 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@969 -- # kill 1150938 00:12:02.951 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1150938 00:12:03.209 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:03.209 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:03.209 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:03.209 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:03.209 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-save 00:12:03.209 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-restore 00:12:03.209 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:03.209 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:03.209 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:03.209 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.209 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.209 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.738 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:05.738 00:12:05.738 real 0m10.242s 00:12:05.738 user 0m11.805s 00:12:05.738 sys 0m4.846s 00:12:05.738 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.738 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.738 
************************************ 00:12:05.738 END TEST nvmf_referrals 00:12:05.738 ************************************ 00:12:05.738 09:46:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:05.738 09:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:05.738 09:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.738 09:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.738 ************************************ 00:12:05.738 START TEST nvmf_connect_disconnect 00:12:05.738 ************************************ 00:12:05.738 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:05.738 * Looking for test storage... 
00:12:05.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.738 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:05.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.739 --rc genhtml_branch_coverage=1 00:12:05.739 --rc genhtml_function_coverage=1 00:12:05.739 --rc genhtml_legend=1 00:12:05.739 --rc geninfo_all_blocks=1 00:12:05.739 --rc geninfo_unexecuted_blocks=1 00:12:05.739 00:12:05.739 ' 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:05.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.739 --rc genhtml_branch_coverage=1 00:12:05.739 --rc genhtml_function_coverage=1 00:12:05.739 --rc genhtml_legend=1 00:12:05.739 --rc geninfo_all_blocks=1 00:12:05.739 --rc geninfo_unexecuted_blocks=1 00:12:05.739 00:12:05.739 ' 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:05.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.739 --rc genhtml_branch_coverage=1 00:12:05.739 --rc genhtml_function_coverage=1 00:12:05.739 --rc genhtml_legend=1 00:12:05.739 --rc geninfo_all_blocks=1 00:12:05.739 --rc geninfo_unexecuted_blocks=1 00:12:05.739 00:12:05.739 ' 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:05.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.739 --rc genhtml_branch_coverage=1 00:12:05.739 --rc genhtml_function_coverage=1 00:12:05.739 --rc genhtml_legend=1 00:12:05.739 --rc geninfo_all_blocks=1 00:12:05.739 --rc geninfo_unexecuted_blocks=1 00:12:05.739 00:12:05.739 ' 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:05.739 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:05.740 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:05.740 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.740 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:05.740 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:05.740 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:05.740 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.740 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.740 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.740 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:05.740 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:05.740 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.740 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.999 09:46:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.999 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.000 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.000 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.000 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.000 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:11.000 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:11.000 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:11.000 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:11.000 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:11.000 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:11.000 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:11.000 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:11.000 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:11.000 Found net devices under 0000:86:00.0: cvl_0_0 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.000 09:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:11.000 Found net devices under 0000:86:00.1: cvl_0_1 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.000 09:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:11.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:12:11.000 00:12:11.000 --- 10.0.0.2 ping statistics --- 00:12:11.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.000 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:11.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:12:11.000 00:12:11.000 --- 10.0.0.1 ping statistics --- 00:12:11.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.000 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # return 0 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # 
nvmfpid=1154790 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 1154790 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1154790 ']' 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:11.000 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.001 [2024-12-07 09:46:39.341136] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:11.001 [2024-12-07 09:46:39.341188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.001 [2024-12-07 09:46:39.403859] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.001 [2024-12-07 09:46:39.445718] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:11.001 [2024-12-07 09:46:39.445760] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.001 [2024-12-07 09:46:39.445768] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.001 [2024-12-07 09:46:39.445774] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.001 [2024-12-07 09:46:39.445779] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.001 [2024-12-07 09:46:39.445837] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.001 [2024-12-07 09:46:39.445934] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.001 [2024-12-07 09:46:39.446028] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.001 [2024-12-07 09:46:39.446029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:11.001 09:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.001 [2024-12-07 09:46:39.607337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.001 09:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.001 [2024-12-07 09:46:39.661693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:11.001 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:13.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.915 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.132 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.781 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.713 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.412 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:02.412 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:02.412 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:02.412 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:02.412 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:02.412 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:02.412 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:02.412 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:02.412 rmmod nvme_tcp 00:16:02.412 rmmod nvme_fabrics 00:16:02.412 rmmod nvme_keyring 00:16:02.412 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:16:02.412 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:02.412 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:02.412 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 1154790 ']' 00:16:02.412 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 1154790 00:16:02.412 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1154790 ']' 00:16:02.412 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1154790 00:16:02.412 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:16:02.412 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:02.412 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1154790 00:16:02.412 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:02.412 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:02.412 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1154790' 00:16:02.412 killing process with pid 1154790 00:16:02.412 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1154790 00:16:02.412 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1154790 00:16:02.671 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:02.671 09:50:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:02.671 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:02.671 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:02.671 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:02.671 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:16:02.671 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:16:02.671 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:02.671 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:02.671 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.671 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.671 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.577 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:04.577 00:16:04.577 real 3m59.345s 00:16:04.577 user 15m16.674s 00:16:04.577 sys 0m24.672s 00:16:04.577 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:04.577 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:04.577 ************************************ 00:16:04.577 END TEST nvmf_connect_disconnect 00:16:04.577 ************************************ 00:16:04.836 09:50:33 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:04.836 ************************************ 00:16:04.836 START TEST nvmf_multitarget 00:16:04.836 ************************************ 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:04.836 * Looking for test storage... 00:16:04.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 
-- # read -ra ver1 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:04.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.836 --rc genhtml_branch_coverage=1 00:16:04.836 --rc genhtml_function_coverage=1 00:16:04.836 --rc genhtml_legend=1 00:16:04.836 --rc geninfo_all_blocks=1 00:16:04.836 --rc 
geninfo_unexecuted_blocks=1 00:16:04.836 00:16:04.836 ' 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:04.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.836 --rc genhtml_branch_coverage=1 00:16:04.836 --rc genhtml_function_coverage=1 00:16:04.836 --rc genhtml_legend=1 00:16:04.836 --rc geninfo_all_blocks=1 00:16:04.836 --rc geninfo_unexecuted_blocks=1 00:16:04.836 00:16:04.836 ' 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:04.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.836 --rc genhtml_branch_coverage=1 00:16:04.836 --rc genhtml_function_coverage=1 00:16:04.836 --rc genhtml_legend=1 00:16:04.836 --rc geninfo_all_blocks=1 00:16:04.836 --rc geninfo_unexecuted_blocks=1 00:16:04.836 00:16:04.836 ' 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:04.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.836 --rc genhtml_branch_coverage=1 00:16:04.836 --rc genhtml_function_coverage=1 00:16:04.836 --rc genhtml_legend=1 00:16:04.836 --rc geninfo_all_blocks=1 00:16:04.836 --rc geninfo_unexecuted_blocks=1 00:16:04.836 00:16:04.836 ' 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.836 09:50:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:16:04.836 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.837 09:50:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:04.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.837 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.095 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:05.096 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:05.096 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:05.096 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@322 -- # local -ga mlx 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:10.359 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 
00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:10.360 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:10.360 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == 
rdma ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:10.360 Found net devices under 0000:86:00.0: cvl_0_0 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:10.360 Found net devices under 0000:86:00.1: cvl_0_1 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # is_hw=yes 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:10.360 09:50:38 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:10.360 09:50:38 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:10.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:10.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:16:10.360 00:16:10.360 --- 10.0.0.2 ping statistics --- 00:16:10.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.360 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:10.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:10.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:16:10.360 00:16:10.360 --- 10.0.0.1 ping statistics --- 00:16:10.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.360 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # return 0 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.360 09:50:38 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=1198078 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 1198078 00:16:10.360 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1198078 ']' 00:16:10.361 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.361 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:10.361 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:10.361 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:10.361 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:10.361 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:10.361 [2024-12-07 09:50:38.640747] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:10.361 [2024-12-07 09:50:38.640802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.361 [2024-12-07 09:50:38.700018] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:10.361 [2024-12-07 09:50:38.744185] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.361 [2024-12-07 09:50:38.744225] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.361 [2024-12-07 09:50:38.744233] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.361 [2024-12-07 09:50:38.744239] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.361 [2024-12-07 09:50:38.744244] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:10.361 [2024-12-07 09:50:38.744287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.361 [2024-12-07 09:50:38.744379] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.361 [2024-12-07 09:50:38.744448] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:10.361 [2024-12-07 09:50:38.744449] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.361 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:10.361 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:10.361 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:10.361 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:10.361 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:10.361 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.361 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:10.361 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:10.361 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:10.361 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:10.361 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:16:10.618 "nvmf_tgt_1" 00:16:10.618 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:10.618 "nvmf_tgt_2" 00:16:10.618 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:10.618 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:10.618 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:10.618 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:10.874 true 00:16:10.874 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:10.874 true 00:16:10.875 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:10.875 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:11.131 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:11.131 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:11.132 09:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:11.132 rmmod nvme_tcp 00:16:11.132 rmmod nvme_fabrics 00:16:11.132 rmmod nvme_keyring 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 1198078 ']' 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 1198078 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1198078 ']' 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1198078 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1198078 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- 
# '[' reactor_0 = sudo ']' 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1198078' 00:16:11.132 killing process with pid 1198078 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1198078 00:16:11.132 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1198078 00:16:11.390 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:11.390 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:11.390 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:11.390 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:11.390 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-save 00:16:11.390 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:11.390 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-restore 00:16:11.390 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:11.390 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:11.390 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.390 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:11.390 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:13.919 
00:16:13.919 real 0m8.662s 00:16:13.919 user 0m6.791s 00:16:13.919 sys 0m4.332s 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:13.919 ************************************ 00:16:13.919 END TEST nvmf_multitarget 00:16:13.919 ************************************ 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:13.919 ************************************ 00:16:13.919 START TEST nvmf_rpc 00:16:13.919 ************************************ 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:13.919 * Looking for test storage... 
00:16:13.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.919 09:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:13.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.919 --rc genhtml_branch_coverage=1 00:16:13.919 --rc genhtml_function_coverage=1 00:16:13.919 --rc genhtml_legend=1 00:16:13.919 --rc geninfo_all_blocks=1 00:16:13.919 --rc geninfo_unexecuted_blocks=1 
00:16:13.919 00:16:13.919 ' 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:13.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.919 --rc genhtml_branch_coverage=1 00:16:13.919 --rc genhtml_function_coverage=1 00:16:13.919 --rc genhtml_legend=1 00:16:13.919 --rc geninfo_all_blocks=1 00:16:13.919 --rc geninfo_unexecuted_blocks=1 00:16:13.919 00:16:13.919 ' 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:13.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.919 --rc genhtml_branch_coverage=1 00:16:13.919 --rc genhtml_function_coverage=1 00:16:13.919 --rc genhtml_legend=1 00:16:13.919 --rc geninfo_all_blocks=1 00:16:13.919 --rc geninfo_unexecuted_blocks=1 00:16:13.919 00:16:13.919 ' 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:13.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.919 --rc genhtml_branch_coverage=1 00:16:13.919 --rc genhtml_function_coverage=1 00:16:13.919 --rc genhtml_legend=1 00:16:13.919 --rc geninfo_all_blocks=1 00:16:13.919 --rc geninfo_unexecuted_blocks=1 00:16:13.919 00:16:13.919 ' 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:13.919 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.920 09:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:13.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:13.920 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:13.920 09:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.194 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:19.194 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:19.194 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:19.194 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:19.194 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:19.194 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:19.194 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:19.194 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:19.194 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:19.194 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:19.194 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:19.194 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:19.194 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:19.194 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:19.194 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:19.194 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:19.195 
09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:19.195 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:19.195 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:19.195 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:19.196 09:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:19.196 Found net devices under 0000:86:00.0: cvl_0_0 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:19.196 Found net devices under 0000:86:00.1: cvl_0_1 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:19.196 09:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # is_hw=yes 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 
00:16:19.196 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:19.197 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:19.197 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:19.197 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:19.197 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:19.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:16:19.463 00:16:19.463 --- 10.0.0.2 ping statistics --- 00:16:19.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.463 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:19.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:19.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:16:19.463 00:16:19.463 --- 10.0.0.1 ping statistics --- 00:16:19.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.463 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # return 0 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # nvmfpid=1201789 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:19.463 
09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 1201789 00:16:19.463 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1201789 ']' 00:16:19.463 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.463 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:19.463 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.463 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:19.463 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.463 [2024-12-07 09:50:48.048382] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:19.463 [2024-12-07 09:50:48.048428] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.463 [2024-12-07 09:50:48.107394] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:19.463 [2024-12-07 09:50:48.148720] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.463 [2024-12-07 09:50:48.148761] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.463 [2024-12-07 09:50:48.148768] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.463 [2024-12-07 09:50:48.148774] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:19.463 [2024-12-07 09:50:48.148779] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.463 [2024-12-07 09:50:48.148840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.463 [2024-12-07 09:50:48.148938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.463 [2024-12-07 09:50:48.149029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.463 [2024-12-07 09:50:48.149031] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.721 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:19.721 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:19.721 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:19.721 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:19.721 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.721 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.721 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:19.721 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.721 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.721 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.721 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:19.721 "tick_rate": 2300000000, 00:16:19.721 "poll_groups": [ 00:16:19.721 { 00:16:19.721 "name": "nvmf_tgt_poll_group_000", 00:16:19.721 "admin_qpairs": 0, 00:16:19.721 "io_qpairs": 0, 00:16:19.721 
"current_admin_qpairs": 0, 00:16:19.721 "current_io_qpairs": 0, 00:16:19.721 "pending_bdev_io": 0, 00:16:19.721 "completed_nvme_io": 0, 00:16:19.721 "transports": [] 00:16:19.721 }, 00:16:19.721 { 00:16:19.721 "name": "nvmf_tgt_poll_group_001", 00:16:19.721 "admin_qpairs": 0, 00:16:19.721 "io_qpairs": 0, 00:16:19.721 "current_admin_qpairs": 0, 00:16:19.721 "current_io_qpairs": 0, 00:16:19.721 "pending_bdev_io": 0, 00:16:19.721 "completed_nvme_io": 0, 00:16:19.721 "transports": [] 00:16:19.721 }, 00:16:19.721 { 00:16:19.721 "name": "nvmf_tgt_poll_group_002", 00:16:19.721 "admin_qpairs": 0, 00:16:19.721 "io_qpairs": 0, 00:16:19.721 "current_admin_qpairs": 0, 00:16:19.721 "current_io_qpairs": 0, 00:16:19.721 "pending_bdev_io": 0, 00:16:19.721 "completed_nvme_io": 0, 00:16:19.721 "transports": [] 00:16:19.721 }, 00:16:19.721 { 00:16:19.721 "name": "nvmf_tgt_poll_group_003", 00:16:19.721 "admin_qpairs": 0, 00:16:19.721 "io_qpairs": 0, 00:16:19.721 "current_admin_qpairs": 0, 00:16:19.721 "current_io_qpairs": 0, 00:16:19.721 "pending_bdev_io": 0, 00:16:19.721 "completed_nvme_io": 0, 00:16:19.721 "transports": [] 00:16:19.721 } 00:16:19.721 ] 00:16:19.721 }' 00:16:19.721 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:19.721 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:19.721 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:19.721 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:19.721 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:19.722 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:19.722 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:19.722 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:19.722 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.722 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.722 [2024-12-07 09:50:48.411258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.722 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.722 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:19.722 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.722 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.722 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.722 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:19.722 "tick_rate": 2300000000, 00:16:19.722 "poll_groups": [ 00:16:19.722 { 00:16:19.722 "name": "nvmf_tgt_poll_group_000", 00:16:19.722 "admin_qpairs": 0, 00:16:19.722 "io_qpairs": 0, 00:16:19.722 "current_admin_qpairs": 0, 00:16:19.722 "current_io_qpairs": 0, 00:16:19.722 "pending_bdev_io": 0, 00:16:19.722 "completed_nvme_io": 0, 00:16:19.722 "transports": [ 00:16:19.722 { 00:16:19.722 "trtype": "TCP" 00:16:19.722 } 00:16:19.722 ] 00:16:19.722 }, 00:16:19.722 { 00:16:19.722 "name": "nvmf_tgt_poll_group_001", 00:16:19.722 "admin_qpairs": 0, 00:16:19.722 "io_qpairs": 0, 00:16:19.722 "current_admin_qpairs": 0, 00:16:19.722 "current_io_qpairs": 0, 00:16:19.722 "pending_bdev_io": 0, 00:16:19.722 "completed_nvme_io": 0, 00:16:19.722 "transports": [ 00:16:19.722 { 00:16:19.722 "trtype": "TCP" 00:16:19.722 } 00:16:19.722 ] 00:16:19.722 }, 00:16:19.722 { 00:16:19.722 "name": "nvmf_tgt_poll_group_002", 00:16:19.722 "admin_qpairs": 0, 00:16:19.722 "io_qpairs": 0, 00:16:19.722 
"current_admin_qpairs": 0, 00:16:19.722 "current_io_qpairs": 0, 00:16:19.722 "pending_bdev_io": 0, 00:16:19.722 "completed_nvme_io": 0, 00:16:19.722 "transports": [ 00:16:19.722 { 00:16:19.722 "trtype": "TCP" 00:16:19.722 } 00:16:19.722 ] 00:16:19.722 }, 00:16:19.722 { 00:16:19.722 "name": "nvmf_tgt_poll_group_003", 00:16:19.722 "admin_qpairs": 0, 00:16:19.722 "io_qpairs": 0, 00:16:19.722 "current_admin_qpairs": 0, 00:16:19.722 "current_io_qpairs": 0, 00:16:19.722 "pending_bdev_io": 0, 00:16:19.722 "completed_nvme_io": 0, 00:16:19.722 "transports": [ 00:16:19.722 { 00:16:19.722 "trtype": "TCP" 00:16:19.722 } 00:16:19.722 ] 00:16:19.722 } 00:16:19.722 ] 00:16:19.722 }' 00:16:19.722 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:19.722 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:19.722 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:19.722 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.979 Malloc1 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.979 [2024-12-07 09:50:48.579305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.979 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.980 
09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:19.980 [2024-12-07 09:50:48.608001] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:16:19.980 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:19.980 could not add new controller: failed to write to nvme-fabrics device 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.980 09:50:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.980 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:21.355 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:21.355 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:21.355 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:21.355 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:21.355 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:23.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.256 09:50:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:23.256 [2024-12-07 09:50:51.930234] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:16:23.256 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:23.256 could not add new controller: failed to write to nvme-fabrics device 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:23.256 09:50:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.256 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:24.631 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:24.631 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:24.631 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:24.631 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:24.631 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:26.538 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:26.538 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:26.538 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:26.538 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:26.539 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( 
nvme_devices == nvme_device_counter )) 00:16:26.539 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:26.539 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:26.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.539 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:26.539 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:26.539 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:26.539 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:26.539 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:26.539 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.797 [2024-12-07 09:50:55.299743] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.797 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:28.169 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:28.169 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:28.169 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:28.169 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:28.169 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:30.071 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:30.071 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:30.071 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:30.071 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:30.071 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:30.071 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:30.071 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:30.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.071 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:30.071 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:30.071 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:30.071 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.071 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:30.071 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.071 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.072 09:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.072 [2024-12-07 09:50:58.659232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.072 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:31.447 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:31.447 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:31.447 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:31.447 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:31.447 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:33.341 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:33.341 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:33.341 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:33.341 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:33.341 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:33.341 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:33.341 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:33.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.341 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:33.341 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:33.341 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:33.341 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.341 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.341 [2024-12-07 09:51:02.062599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.598 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.598 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:33.598 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.598 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.598 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.598 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:33.598 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.598 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.598 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.598 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:34.968 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:34.968 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:34.968 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 
-- # local nvme_device_counter=1 nvme_devices=0 00:16:34.968 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:34.968 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:36.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # return 0 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.863 [2024-12-07 09:51:05.466313] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.863 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.235 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:38.235 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:38.235 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:38.235 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:38.236 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # 
sleep 2 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:40.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.136 [2024-12-07 09:51:08.813291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.136 09:51:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.136 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:41.511 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:41.511 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:41.511 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:41.511 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:41.511 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:43.413 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:43.413 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:16:43.413 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:43.413 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:43.413 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:43.413 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:43.413 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:43.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.413 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:43.413 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:43.413 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:43.413 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.413 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:43.413 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.413 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:43.413 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:43.413 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.413 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.413 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:43.413 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.413 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.413 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.413 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.414 [2024-12-07 09:51:12.080223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.414 [2024-12-07 09:51:12.128320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.414 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.673 
09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 [2024-12-07 09:51:12.176498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:43.673 
09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 [2024-12-07 09:51:12.224660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 [2024-12-07 
09:51:12.272793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.673 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.674 
09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.674 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:43.674 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.674 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.674 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.674 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:43.674 "tick_rate": 2300000000, 00:16:43.674 "poll_groups": [ 00:16:43.674 { 00:16:43.674 "name": "nvmf_tgt_poll_group_000", 00:16:43.674 "admin_qpairs": 2, 00:16:43.674 "io_qpairs": 168, 00:16:43.674 "current_admin_qpairs": 0, 00:16:43.674 "current_io_qpairs": 0, 00:16:43.674 "pending_bdev_io": 0, 00:16:43.674 "completed_nvme_io": 266, 00:16:43.674 "transports": [ 00:16:43.674 { 00:16:43.674 "trtype": "TCP" 00:16:43.674 } 00:16:43.674 ] 00:16:43.674 }, 00:16:43.674 { 00:16:43.674 "name": "nvmf_tgt_poll_group_001", 00:16:43.674 "admin_qpairs": 2, 00:16:43.674 "io_qpairs": 168, 00:16:43.674 "current_admin_qpairs": 0, 00:16:43.674 "current_io_qpairs": 0, 00:16:43.674 "pending_bdev_io": 0, 00:16:43.674 "completed_nvme_io": 269, 00:16:43.674 "transports": [ 00:16:43.674 { 00:16:43.674 "trtype": "TCP" 00:16:43.674 } 00:16:43.674 ] 00:16:43.674 }, 00:16:43.674 { 00:16:43.674 "name": "nvmf_tgt_poll_group_002", 00:16:43.674 "admin_qpairs": 1, 00:16:43.674 "io_qpairs": 168, 00:16:43.674 "current_admin_qpairs": 0, 00:16:43.674 "current_io_qpairs": 0, 00:16:43.674 "pending_bdev_io": 0, 00:16:43.674 "completed_nvme_io": 218, 00:16:43.674 "transports": [ 00:16:43.674 { 00:16:43.674 "trtype": "TCP" 00:16:43.674 } 00:16:43.674 ] 00:16:43.674 }, 00:16:43.674 { 00:16:43.674 "name": "nvmf_tgt_poll_group_003", 00:16:43.674 "admin_qpairs": 2, 00:16:43.674 "io_qpairs": 168, 
00:16:43.674 "current_admin_qpairs": 0, 00:16:43.674 "current_io_qpairs": 0, 00:16:43.674 "pending_bdev_io": 0, 00:16:43.674 "completed_nvme_io": 269, 00:16:43.674 "transports": [ 00:16:43.674 { 00:16:43.674 "trtype": "TCP" 00:16:43.674 } 00:16:43.674 ] 00:16:43.674 } 00:16:43.674 ] 00:16:43.674 }' 00:16:43.674 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:43.674 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:43.674 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:43.674 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:43.674 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:43.674 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:43.674 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:43.674 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:43.674 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:43.932 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:16:43.932 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:43.932 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:43.932 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:43.932 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:43.932 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:43.933 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:43.933 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:43.933 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:43.933 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:43.933 rmmod nvme_tcp 00:16:43.933 rmmod nvme_fabrics 00:16:43.933 rmmod nvme_keyring 00:16:43.933 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:43.933 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:43.933 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:43.933 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 1201789 ']' 00:16:43.933 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 1201789 00:16:43.933 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1201789 ']' 00:16:43.933 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1201789 00:16:43.933 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:16:43.933 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:43.933 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1201789 00:16:43.933 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:43.933 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:43.933 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1201789' 00:16:43.933 killing process with pid 1201789 00:16:43.933 09:51:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1201789 00:16:43.933 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1201789 00:16:44.191 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:44.191 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:44.191 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:44.191 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:44.191 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-restore 00:16:44.191 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:44.191 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-save 00:16:44.192 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:44.192 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:44.192 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.192 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.192 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.095 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:46.095 00:16:46.095 real 0m32.711s 00:16:46.095 user 1m39.418s 00:16:46.095 sys 0m6.396s 00:16:46.095 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:46.095 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.095 ************************************ 00:16:46.095 END TEST 
nvmf_rpc 00:16:46.095 ************************************ 00:16:46.354 09:51:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:46.354 09:51:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:46.354 09:51:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:46.354 09:51:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:46.354 ************************************ 00:16:46.354 START TEST nvmf_invalid 00:16:46.354 ************************************ 00:16:46.354 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:46.354 * Looking for test storage... 00:16:46.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.355 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:46.355 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:16:46.355 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:46.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.355 --rc genhtml_branch_coverage=1 00:16:46.355 --rc genhtml_function_coverage=1 00:16:46.355 --rc genhtml_legend=1 00:16:46.355 --rc geninfo_all_blocks=1 00:16:46.355 --rc geninfo_unexecuted_blocks=1 00:16:46.355 00:16:46.355 ' 
00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:46.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.355 --rc genhtml_branch_coverage=1 00:16:46.355 --rc genhtml_function_coverage=1 00:16:46.355 --rc genhtml_legend=1 00:16:46.355 --rc geninfo_all_blocks=1 00:16:46.355 --rc geninfo_unexecuted_blocks=1 00:16:46.355 00:16:46.355 ' 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:46.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.355 --rc genhtml_branch_coverage=1 00:16:46.355 --rc genhtml_function_coverage=1 00:16:46.355 --rc genhtml_legend=1 00:16:46.355 --rc geninfo_all_blocks=1 00:16:46.355 --rc geninfo_unexecuted_blocks=1 00:16:46.355 00:16:46.355 ' 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:46.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.355 --rc genhtml_branch_coverage=1 00:16:46.355 --rc genhtml_function_coverage=1 00:16:46.355 --rc genhtml_legend=1 00:16:46.355 --rc geninfo_all_blocks=1 00:16:46.355 --rc geninfo_unexecuted_blocks=1 00:16:46.355 00:16:46.355 ' 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.355 09:51:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.355 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.613 
09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.613 09:51:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:46.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:46.613 09:51:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:46.613 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:53.175 09:51:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:53.175 09:51:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:53.175 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:53.175 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b 
== \0\x\1\0\1\7 ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:53.175 Found net devices under 0000:86:00.0: cvl_0_0 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.175 09:51:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:53.175 Found net devices under 0000:86:00.1: cvl_0_1 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # is_hw=yes 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:53.175 09:51:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:53.175 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:53.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:53.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:16:53.176 00:16:53.176 --- 10.0.0.2 ping statistics --- 00:16:53.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.176 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:53.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:53.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:16:53.176 00:16:53.176 --- 10.0.0.1 ping statistics --- 00:16:53.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.176 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # return 0 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.176 09:51:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=1209983 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 1209983 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1209983 ']' 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:53.176 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:53.176 [2024-12-07 09:51:21.036197] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:53.176 [2024-12-07 09:51:21.036240] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.176 [2024-12-07 09:51:21.095079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:53.176 [2024-12-07 09:51:21.137511] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.176 [2024-12-07 09:51:21.137551] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.176 [2024-12-07 09:51:21.137563] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.176 [2024-12-07 09:51:21.137569] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.176 [2024-12-07 09:51:21.137574] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:53.176 [2024-12-07 09:51:21.137621] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.176 [2024-12-07 09:51:21.137718] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.176 [2024-12-07 09:51:21.137784] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:53.176 [2024-12-07 09:51:21.137785] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.176 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.176 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:16:53.176 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:53.176 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:53.176 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:53.176 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.176 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:53.176 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31760 00:16:53.176 [2024-12-07 09:51:21.455719] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:53.176 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:53.176 { 00:16:53.176 "nqn": "nqn.2016-06.io.spdk:cnode31760", 00:16:53.176 "tgt_name": "foobar", 00:16:53.176 "method": "nvmf_create_subsystem", 00:16:53.176 "req_id": 1 00:16:53.176 } 00:16:53.176 Got JSON-RPC error 
response 00:16:53.176 response: 00:16:53.176 { 00:16:53.176 "code": -32603, 00:16:53.176 "message": "Unable to find target foobar" 00:16:53.176 }' 00:16:53.176 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:53.176 { 00:16:53.176 "nqn": "nqn.2016-06.io.spdk:cnode31760", 00:16:53.176 "tgt_name": "foobar", 00:16:53.176 "method": "nvmf_create_subsystem", 00:16:53.176 "req_id": 1 00:16:53.176 } 00:16:53.176 Got JSON-RPC error response 00:16:53.176 response: 00:16:53.176 { 00:16:53.176 "code": -32603, 00:16:53.176 "message": "Unable to find target foobar" 00:16:53.176 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:53.176 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:53.176 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12837 00:16:53.176 [2024-12-07 09:51:21.668479] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12837: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:53.176 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:53.176 { 00:16:53.176 "nqn": "nqn.2016-06.io.spdk:cnode12837", 00:16:53.176 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:53.176 "method": "nvmf_create_subsystem", 00:16:53.176 "req_id": 1 00:16:53.176 } 00:16:53.176 Got JSON-RPC error response 00:16:53.176 response: 00:16:53.176 { 00:16:53.176 "code": -32602, 00:16:53.176 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:53.176 }' 00:16:53.176 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:53.176 { 00:16:53.176 "nqn": "nqn.2016-06.io.spdk:cnode12837", 00:16:53.176 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:53.176 "method": "nvmf_create_subsystem", 
00:16:53.176 "req_id": 1 00:16:53.176 } 00:16:53.176 Got JSON-RPC error response 00:16:53.176 response: 00:16:53.176 { 00:16:53.176 "code": -32602, 00:16:53.176 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:53.176 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:53.176 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:53.176 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27157 00:16:53.176 [2024-12-07 09:51:21.885179] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27157: invalid model number 'SPDK_Controller' 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:53.434 { 00:16:53.434 "nqn": "nqn.2016-06.io.spdk:cnode27157", 00:16:53.434 "model_number": "SPDK_Controller\u001f", 00:16:53.434 "method": "nvmf_create_subsystem", 00:16:53.434 "req_id": 1 00:16:53.434 } 00:16:53.434 Got JSON-RPC error response 00:16:53.434 response: 00:16:53.434 { 00:16:53.434 "code": -32602, 00:16:53.434 "message": "Invalid MN SPDK_Controller\u001f" 00:16:53.434 }' 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:53.434 { 00:16:53.434 "nqn": "nqn.2016-06.io.spdk:cnode27157", 00:16:53.434 "model_number": "SPDK_Controller\u001f", 00:16:53.434 "method": "nvmf_create_subsystem", 00:16:53.434 "req_id": 1 00:16:53.434 } 00:16:53.434 Got JSON-RPC error response 00:16:53.434 response: 00:16:53.434 { 00:16:53.434 "code": -32602, 00:16:53.434 "message": "Invalid MN SPDK_Controller\u001f" 00:16:53.434 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.434 09:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:53.434 09:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:16:53.434 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:16:53.435 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.435 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.435 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:16:53.435 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:16:53.435 09:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:16:53.435 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.435 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.435 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:53.435 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:53.435 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:53.435 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.435 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.435 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:16:53.435 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:16:53.435 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:16:53.435 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:16:53.435 09:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.435 09:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.435 09:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ * == \- ]] 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '*p:1RHtN\!1+oWVi#nys;' 00:16:53.435 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '*p:1RHtN\!1+oWVi#nys;' nqn.2016-06.io.spdk:cnode14736 00:16:53.693 [2024-12-07 09:51:22.218368] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14736: invalid serial number '*p:1RHtN\!1+oWVi#nys;' 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:53.693 { 00:16:53.693 "nqn": "nqn.2016-06.io.spdk:cnode14736", 00:16:53.693 "serial_number": "*p:1RHtN\\!1+oWVi#nys;", 00:16:53.693 "method": "nvmf_create_subsystem", 00:16:53.693 "req_id": 1 00:16:53.693 } 00:16:53.693 Got JSON-RPC error response 00:16:53.693 response: 00:16:53.693 { 00:16:53.693 "code": -32602, 00:16:53.693 "message": "Invalid SN *p:1RHtN\\!1+oWVi#nys;" 00:16:53.693 }' 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:53.693 { 00:16:53.693 "nqn": "nqn.2016-06.io.spdk:cnode14736", 00:16:53.693 "serial_number": "*p:1RHtN\\!1+oWVi#nys;", 00:16:53.693 "method": "nvmf_create_subsystem", 00:16:53.693 "req_id": 1 00:16:53.693 } 00:16:53.693 Got JSON-RPC error response 00:16:53.693 response: 00:16:53.693 { 00:16:53.693 "code": -32602, 00:16:53.693 "message": "Invalid SN *p:1RHtN\\!1+oWVi#nys;" 00:16:53.693 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:53.693 09:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.693 09:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:53.693 09:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:53.693 09:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:53.693 09:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:16:53.693 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.694 09:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.694 09:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:16:53.694 09:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.694 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:53.951 09:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 
00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:53.951 
09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:53.951 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:53.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:53.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:53.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:53.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:53.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:53.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:53.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:53.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:53.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ; == \- ]] 00:16:53.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ';m3Am'\''r/aW-jBBY(LdTfUS,KX*4,=x?d)wqKn#@x8' 00:16:53.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ';m3Am'\''r/aW-jBBY(LdTfUS,KX*4,=x?d)wqKn#@x8' nqn.2016-06.io.spdk:cnode11357 00:16:54.208 [2024-12-07 09:51:22.691945] 
nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11357: invalid model number ';m3Am'r/aW-jBBY(LdTfUS,KX*4,=x?d)wqKn#@x8' 00:16:54.208 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:54.208 { 00:16:54.209 "nqn": "nqn.2016-06.io.spdk:cnode11357", 00:16:54.209 "model_number": ";m3Am'\''r/aW-jBBY(LdTfUS,KX*4,=x?d)wqKn#@x8", 00:16:54.209 "method": "nvmf_create_subsystem", 00:16:54.209 "req_id": 1 00:16:54.209 } 00:16:54.209 Got JSON-RPC error response 00:16:54.209 response: 00:16:54.209 { 00:16:54.209 "code": -32602, 00:16:54.209 "message": "Invalid MN ;m3Am'\''r/aW-jBBY(LdTfUS,KX*4,=x?d)wqKn#@x8" 00:16:54.209 }' 00:16:54.209 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:54.209 { 00:16:54.209 "nqn": "nqn.2016-06.io.spdk:cnode11357", 00:16:54.209 "model_number": ";m3Am'r/aW-jBBY(LdTfUS,KX*4,=x?d)wqKn#@x8", 00:16:54.209 "method": "nvmf_create_subsystem", 00:16:54.209 "req_id": 1 00:16:54.209 } 00:16:54.209 Got JSON-RPC error response 00:16:54.209 response: 00:16:54.209 { 00:16:54.209 "code": -32602, 00:16:54.209 "message": "Invalid MN ;m3Am'r/aW-jBBY(LdTfUS,KX*4,=x?d)wqKn#@x8" 00:16:54.209 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:54.209 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:54.209 [2024-12-07 09:51:22.888683] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.209 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:54.466 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:54.466 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 
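The long xtrace run above assembles the invalid model-number string one character at a time, converting each character code with `printf %x` and decoding it with `echo -e` before appending it. A condensed sketch of that pattern follows; `gen_random_string` and the random character range are illustrative only (invalid.sh itself walks a precomputed list of codes), not the actual helper:

```shell
#!/usr/bin/env bash
# Condensed sketch of the character-assembly loop traced above.
# gen_random_string is a hypothetical name; the traced script drives the same
# printf %x / echo -e pair per character and appends with string+=.
gen_random_string() {
    local length=$1 string='' ll code hex
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 94 + 33 ))   # printable ASCII, 0x21-0x7e (illustrative choice)
        hex=$(printf %x "$code")       # e.g. 114 -> 72
        string+=$(echo -e "\x$hex")    # e.g. \x72 -> r
    done
    echo "$string"
}

gen_random_string 41   # a 41-char string, like the model number rejected above
```

Passing such an over-length string as the `-d` (model number) argument of `rpc.py nvmf_create_subsystem` is expected to be rejected, matching the `Invalid MN` JSON-RPC error recorded above.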
00:16:54.466 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:54.466 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:54.466 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:54.723 [2024-12-07 09:51:23.322135] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:54.723 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:54.723 { 00:16:54.723 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:54.723 "listen_address": { 00:16:54.723 "trtype": "tcp", 00:16:54.723 "traddr": "", 00:16:54.723 "trsvcid": "4421" 00:16:54.723 }, 00:16:54.723 "method": "nvmf_subsystem_remove_listener", 00:16:54.723 "req_id": 1 00:16:54.723 } 00:16:54.723 Got JSON-RPC error response 00:16:54.723 response: 00:16:54.723 { 00:16:54.723 "code": -32602, 00:16:54.723 "message": "Invalid parameters" 00:16:54.723 }' 00:16:54.723 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:54.723 { 00:16:54.723 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:54.723 "listen_address": { 00:16:54.723 "trtype": "tcp", 00:16:54.723 "traddr": "", 00:16:54.723 "trsvcid": "4421" 00:16:54.723 }, 00:16:54.723 "method": "nvmf_subsystem_remove_listener", 00:16:54.723 "req_id": 1 00:16:54.723 } 00:16:54.723 Got JSON-RPC error response 00:16:54.723 response: 00:16:54.723 { 00:16:54.723 "code": -32602, 00:16:54.723 "message": "Invalid parameters" 00:16:54.723 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:54.723 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3912 -i 0 00:16:54.981 [2024-12-07 
09:51:23.526813] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3912: invalid cntlid range [0-65519] 00:16:54.981 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:54.981 { 00:16:54.981 "nqn": "nqn.2016-06.io.spdk:cnode3912", 00:16:54.981 "min_cntlid": 0, 00:16:54.981 "method": "nvmf_create_subsystem", 00:16:54.981 "req_id": 1 00:16:54.981 } 00:16:54.981 Got JSON-RPC error response 00:16:54.981 response: 00:16:54.981 { 00:16:54.981 "code": -32602, 00:16:54.981 "message": "Invalid cntlid range [0-65519]" 00:16:54.981 }' 00:16:54.981 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:54.981 { 00:16:54.981 "nqn": "nqn.2016-06.io.spdk:cnode3912", 00:16:54.981 "min_cntlid": 0, 00:16:54.981 "method": "nvmf_create_subsystem", 00:16:54.981 "req_id": 1 00:16:54.981 } 00:16:54.981 Got JSON-RPC error response 00:16:54.981 response: 00:16:54.981 { 00:16:54.981 "code": -32602, 00:16:54.981 "message": "Invalid cntlid range [0-65519]" 00:16:54.981 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:54.981 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28619 -i 65520 00:16:55.239 [2024-12-07 09:51:23.731526] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28619: invalid cntlid range [65520-65519] 00:16:55.239 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:55.239 { 00:16:55.239 "nqn": "nqn.2016-06.io.spdk:cnode28619", 00:16:55.239 "min_cntlid": 65520, 00:16:55.239 "method": "nvmf_create_subsystem", 00:16:55.239 "req_id": 1 00:16:55.239 } 00:16:55.239 Got JSON-RPC error response 00:16:55.239 response: 00:16:55.239 { 00:16:55.239 "code": -32602, 00:16:55.239 "message": "Invalid cntlid range [65520-65519]" 00:16:55.239 
}' 00:16:55.239 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:55.239 { 00:16:55.239 "nqn": "nqn.2016-06.io.spdk:cnode28619", 00:16:55.239 "min_cntlid": 65520, 00:16:55.239 "method": "nvmf_create_subsystem", 00:16:55.239 "req_id": 1 00:16:55.239 } 00:16:55.239 Got JSON-RPC error response 00:16:55.239 response: 00:16:55.239 { 00:16:55.239 "code": -32602, 00:16:55.239 "message": "Invalid cntlid range [65520-65519]" 00:16:55.239 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:55.239 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3052 -I 0 00:16:55.239 [2024-12-07 09:51:23.924200] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3052: invalid cntlid range [1-0] 00:16:55.239 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:55.239 { 00:16:55.239 "nqn": "nqn.2016-06.io.spdk:cnode3052", 00:16:55.239 "max_cntlid": 0, 00:16:55.239 "method": "nvmf_create_subsystem", 00:16:55.239 "req_id": 1 00:16:55.239 } 00:16:55.239 Got JSON-RPC error response 00:16:55.239 response: 00:16:55.239 { 00:16:55.239 "code": -32602, 00:16:55.239 "message": "Invalid cntlid range [1-0]" 00:16:55.239 }' 00:16:55.239 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:55.239 { 00:16:55.239 "nqn": "nqn.2016-06.io.spdk:cnode3052", 00:16:55.239 "max_cntlid": 0, 00:16:55.239 "method": "nvmf_create_subsystem", 00:16:55.239 "req_id": 1 00:16:55.239 } 00:16:55.239 Got JSON-RPC error response 00:16:55.239 response: 00:16:55.239 { 00:16:55.239 "code": -32602, 00:16:55.239 "message": "Invalid cntlid range [1-0]" 00:16:55.239 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:55.239 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9334 -I 65520 00:16:55.497 [2024-12-07 09:51:24.124871] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9334: invalid cntlid range [1-65520] 00:16:55.497 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:55.497 { 00:16:55.497 "nqn": "nqn.2016-06.io.spdk:cnode9334", 00:16:55.497 "max_cntlid": 65520, 00:16:55.497 "method": "nvmf_create_subsystem", 00:16:55.497 "req_id": 1 00:16:55.497 } 00:16:55.497 Got JSON-RPC error response 00:16:55.497 response: 00:16:55.497 { 00:16:55.497 "code": -32602, 00:16:55.497 "message": "Invalid cntlid range [1-65520]" 00:16:55.497 }' 00:16:55.497 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:55.497 { 00:16:55.497 "nqn": "nqn.2016-06.io.spdk:cnode9334", 00:16:55.497 "max_cntlid": 65520, 00:16:55.497 "method": "nvmf_create_subsystem", 00:16:55.497 "req_id": 1 00:16:55.497 } 00:16:55.497 Got JSON-RPC error response 00:16:55.497 response: 00:16:55.497 { 00:16:55.497 "code": -32602, 00:16:55.497 "message": "Invalid cntlid range [1-65520]" 00:16:55.497 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:55.497 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9400 -i 6 -I 5 00:16:55.755 [2024-12-07 09:51:24.325585] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9400: invalid cntlid range [6-5] 00:16:55.755 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:55.755 { 00:16:55.755 "nqn": "nqn.2016-06.io.spdk:cnode9400", 00:16:55.755 "min_cntlid": 6, 00:16:55.755 "max_cntlid": 5, 00:16:55.755 "method": "nvmf_create_subsystem", 00:16:55.755 "req_id": 1 00:16:55.755 } 
00:16:55.755 Got JSON-RPC error response 00:16:55.755 response: 00:16:55.755 { 00:16:55.755 "code": -32602, 00:16:55.755 "message": "Invalid cntlid range [6-5]" 00:16:55.755 }' 00:16:55.755 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:55.755 { 00:16:55.755 "nqn": "nqn.2016-06.io.spdk:cnode9400", 00:16:55.755 "min_cntlid": 6, 00:16:55.755 "max_cntlid": 5, 00:16:55.755 "method": "nvmf_create_subsystem", 00:16:55.755 "req_id": 1 00:16:55.755 } 00:16:55.755 Got JSON-RPC error response 00:16:55.755 response: 00:16:55.755 { 00:16:55.755 "code": -32602, 00:16:55.755 "message": "Invalid cntlid range [6-5]" 00:16:55.755 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:55.755 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:55.755 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:55.755 { 00:16:55.755 "name": "foobar", 00:16:55.755 "method": "nvmf_delete_target", 00:16:55.755 "req_id": 1 00:16:55.755 } 00:16:55.755 Got JSON-RPC error response 00:16:55.755 response: 00:16:55.755 { 00:16:55.755 "code": -32602, 00:16:55.755 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:55.755 }' 00:16:55.755 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:55.755 { 00:16:55.755 "name": "foobar", 00:16:55.755 "method": "nvmf_delete_target", 00:16:55.755 "req_id": 1 00:16:55.755 } 00:16:55.755 Got JSON-RPC error response 00:16:55.755 response: 00:16:55.755 { 00:16:55.755 "code": -32602, 00:16:55.755 "message": "The specified target doesn't exist, cannot delete it." 
00:16:55.755 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:55.755 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:55.755 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:55.755 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:55.755 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:16:55.755 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:55.755 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:16:55.755 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:55.755 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:55.755 rmmod nvme_tcp 00:16:56.013 rmmod nvme_fabrics 00:16:56.013 rmmod nvme_keyring 00:16:56.013 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:56.013 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:16:56.013 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:16:56.013 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@513 -- # '[' -n 1209983 ']' 00:16:56.013 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # killprocess 1209983 00:16:56.013 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1209983 ']' 00:16:56.013 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1209983 00:16:56.013 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:16:56.013 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:56.013 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1209983 00:16:56.013 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:56.013 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:56.013 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1209983' 00:16:56.013 killing process with pid 1209983 00:16:56.013 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1209983 00:16:56.013 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1209983 00:16:56.271 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:56.271 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:56.271 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:56.271 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:16:56.271 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-save 00:16:56.271 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:56.271 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-restore 00:16:56.271 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:56.271 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:56.271 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.271 09:51:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.271 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.175 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:58.175 00:16:58.175 real 0m11.963s 00:16:58.175 user 0m18.747s 00:16:58.175 sys 0m5.243s 00:16:58.175 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:58.175 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:58.175 ************************************ 00:16:58.175 END TEST nvmf_invalid 00:16:58.175 ************************************ 00:16:58.175 09:51:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:58.175 09:51:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:58.175 09:51:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:58.175 09:51:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:58.434 ************************************ 00:16:58.434 START TEST nvmf_connect_stress 00:16:58.434 ************************************ 00:16:58.434 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:58.434 * Looking for test storage... 
00:16:58.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:58.434 09:51:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:58.434 09:51:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:58.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.434 --rc genhtml_branch_coverage=1 00:16:58.434 --rc genhtml_function_coverage=1 00:16:58.434 --rc genhtml_legend=1 00:16:58.434 --rc geninfo_all_blocks=1 00:16:58.434 --rc geninfo_unexecuted_blocks=1 00:16:58.434 00:16:58.434 ' 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:58.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.434 --rc genhtml_branch_coverage=1 00:16:58.434 --rc genhtml_function_coverage=1 00:16:58.434 --rc genhtml_legend=1 00:16:58.434 --rc geninfo_all_blocks=1 00:16:58.434 --rc geninfo_unexecuted_blocks=1 00:16:58.434 00:16:58.434 ' 00:16:58.434 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:58.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.434 --rc genhtml_branch_coverage=1 00:16:58.434 --rc genhtml_function_coverage=1 00:16:58.434 --rc genhtml_legend=1 00:16:58.435 --rc geninfo_all_blocks=1 00:16:58.435 --rc geninfo_unexecuted_blocks=1 00:16:58.435 00:16:58.435 ' 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:58.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.435 --rc genhtml_branch_coverage=1 00:16:58.435 --rc genhtml_function_coverage=1 00:16:58.435 --rc genhtml_legend=1 00:16:58.435 --rc geninfo_all_blocks=1 00:16:58.435 --rc geninfo_unexecuted_blocks=1 00:16:58.435 00:16:58.435 ' 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:58.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:58.435 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:03.706 09:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:03.706 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:03.706 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:03.706 Found net devices under 0000:86:00.0: cvl_0_0 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:03.706 Found net devices under 0000:86:00.1: cvl_0_1 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.706 09:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.706 09:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:03.706 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:03.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:03.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:17:03.966 00:17:03.966 --- 10.0.0.2 ping statistics --- 00:17:03.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.966 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:03.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:03.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:17:03.966 00:17:03.966 --- 10.0.0.1 ping statistics --- 00:17:03.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.966 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # return 0 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:03.966 09:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=1214148 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 1214148 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1214148 ']' 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:03.966 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.966 [2024-12-07 09:51:32.594543] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:17:03.966 [2024-12-07 09:51:32.594589] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.966 [2024-12-07 09:51:32.647788] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:03.966 [2024-12-07 09:51:32.690136] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.966 [2024-12-07 09:51:32.690171] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.966 [2024-12-07 09:51:32.690179] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.966 [2024-12-07 09:51:32.690185] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.967 [2024-12-07 09:51:32.690191] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:03.967 [2024-12-07 09:51:32.690286] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.967 [2024-12-07 09:51:32.690385] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:03.967 [2024-12-07 09:51:32.690386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.225 [2024-12-07 09:51:32.820892] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.225 [2024-12-07 09:51:32.849375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.225 NULL1 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1214177 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.225 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.226 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.226 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.226 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.226 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.226 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.226 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.226 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.226 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.226 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.226 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.226 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.226 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.226 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.226 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.226 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.226 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.226 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.226 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.484 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:04.484 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:04.484 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:04.484 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.485 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.485 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.743 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.743 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:04.743 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.743 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.743 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.001 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.001 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:05.001 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.001 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.001 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.259 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.259 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:05.259 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.259 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.259 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.825 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.825 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:05.825 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.825 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.825 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:06.085 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.085 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:06.085 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.085 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.085 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:06.343 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.343 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:06.343 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.343 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.343 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:06.599 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.599 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:06.599 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.599 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.599 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:06.856 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.856 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:06.856 09:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.856 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.856 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:07.421 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.421 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:07.421 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.421 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.421 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:07.678 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.679 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:07.679 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.679 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.679 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:07.935 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.935 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:07.935 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.935 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.935 
09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.193 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.193 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:08.193 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.193 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.193 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.759 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.759 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:08.759 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.759 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.759 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.018 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.018 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:09.018 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.019 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.019 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.277 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.277 
09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:09.277 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.277 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.277 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.534 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.534 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:09.534 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.534 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.535 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.792 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.792 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:09.792 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.792 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.792 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.355 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.355 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:10.355 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:17:10.355 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.355 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.612 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.612 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:10.612 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.612 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.612 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.876 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.876 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:10.876 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.876 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.876 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.142 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.142 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:11.142 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.142 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.142 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:17:11.412 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.412 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:11.412 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.412 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.412 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.765 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.765 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:11.765 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.765 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.765 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.050 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.050 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:12.050 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.050 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.050 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.619 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.619 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 1214177 00:17:12.619 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.619 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.619 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.875 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.875 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:12.875 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.875 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.876 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.132 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.132 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:13.132 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.132 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.132 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.388 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.388 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:13.388 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.388 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:13.388 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.952 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.952 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:13.952 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.952 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.952 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.208 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.208 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:14.208 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.208 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.208 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.465 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1214177 00:17:14.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1214177) - No such process 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1214177 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:14.465 rmmod nvme_tcp 00:17:14.465 rmmod nvme_fabrics 00:17:14.465 rmmod nvme_keyring 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 1214148 ']' 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 1214148 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1214148 ']' 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1214148 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@955 -- # uname 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1214148 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1214148' 00:17:14.465 killing process with pid 1214148 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1214148 00:17:14.465 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1214148 00:17:14.724 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:14.724 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:14.724 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:14.724 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:14.724 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-save 00:17:14.724 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:14.724 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-restore 00:17:14.724 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:14.724 09:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:14.724 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.724 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:14.724 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:17.257 00:17:17.257 real 0m18.490s 00:17:17.257 user 0m39.345s 00:17:17.257 sys 0m7.955s 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.257 ************************************ 00:17:17.257 END TEST nvmf_connect_stress 00:17:17.257 ************************************ 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:17.257 ************************************ 00:17:17.257 START TEST nvmf_fused_ordering 00:17:17.257 ************************************ 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:17.257 * Looking for test storage... 
00:17:17.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:17.257 09:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.257 09:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:17.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.257 --rc genhtml_branch_coverage=1 00:17:17.257 --rc genhtml_function_coverage=1 00:17:17.257 --rc genhtml_legend=1 00:17:17.257 --rc geninfo_all_blocks=1 00:17:17.257 --rc geninfo_unexecuted_blocks=1 00:17:17.257 00:17:17.257 ' 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:17.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.257 --rc genhtml_branch_coverage=1 00:17:17.257 --rc genhtml_function_coverage=1 00:17:17.257 --rc genhtml_legend=1 00:17:17.257 --rc geninfo_all_blocks=1 00:17:17.257 --rc geninfo_unexecuted_blocks=1 00:17:17.257 00:17:17.257 ' 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:17.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.257 --rc genhtml_branch_coverage=1 00:17:17.257 --rc genhtml_function_coverage=1 00:17:17.257 --rc genhtml_legend=1 00:17:17.257 --rc geninfo_all_blocks=1 00:17:17.257 --rc geninfo_unexecuted_blocks=1 00:17:17.257 00:17:17.257 ' 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:17.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.257 --rc genhtml_branch_coverage=1 00:17:17.257 --rc genhtml_function_coverage=1 00:17:17.257 --rc genhtml_legend=1 00:17:17.257 --rc geninfo_all_blocks=1 00:17:17.257 --rc geninfo_unexecuted_blocks=1 00:17:17.257 00:17:17.257 ' 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.257 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:17.258 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:17.258 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:23.819 09:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:23.819 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:23.819 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:23.819 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:23.820 Found net devices under 0000:86:00.0: cvl_0_0 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:23.820 Found net devices under 0000:86:00.1: cvl_0_1 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.820 09:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # is_hw=yes 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.820 09:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:23.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:23.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:17:23.820 00:17:23.820 --- 10.0.0.2 ping statistics --- 00:17:23.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.820 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:23.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:23.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:17:23.820 00:17:23.820 --- 10.0.0.1 ping statistics --- 00:17:23.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.820 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # return 0 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:23.820 09:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=1219554 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 1219554 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1219554 ']' 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:23.820 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:23.820 [2024-12-07 09:51:51.668156] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:17:23.820 [2024-12-07 09:51:51.668205] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.821 [2024-12-07 09:51:51.724062] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.821 [2024-12-07 09:51:51.766006] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.821 [2024-12-07 09:51:51.766043] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:23.821 [2024-12-07 09:51:51.766054] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:23.821 [2024-12-07 09:51:51.766060] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:23.821 [2024-12-07 09:51:51.766066] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:23.821 [2024-12-07 09:51:51.766100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:23.821 [2024-12-07 09:51:51.896563] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:23.821 [2024-12-07 09:51:51.912736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:23.821 NULL1 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.821 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:23.821 [2024-12-07 09:51:51.965883] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:23.821 [2024-12-07 09:51:51.965915] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1219574 ] 00:17:23.821 Attached to nqn.2016-06.io.spdk:cnode1 00:17:23.821 Namespace ID: 1 size: 1GB 00:17:23.821 fused_ordering(0) ... fused_ordering(1023) [1024 repetitive per-iteration fused_ordering lines elided; iterations logged from 00:17:23.821 through 00:17:25.170] 00:17:25.170 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:25.170 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:25.170 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:25.170 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:25.170 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:25.170 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:25.170 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:25.170 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:25.170 rmmod nvme_tcp 00:17:25.170 rmmod nvme_fabrics 00:17:25.170 rmmod nvme_keyring 00:17:25.170 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:17:25.170 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:25.170 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:25.170 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 1219554 ']' 00:17:25.170 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 1219554 00:17:25.170 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1219554 ']' 00:17:25.170 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1219554 00:17:25.170 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:17:25.170 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:25.170 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1219554 00:17:25.428 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:25.428 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:25.428 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1219554' 00:17:25.428 killing process with pid 1219554 00:17:25.428 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1219554 00:17:25.428 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1219554 00:17:25.428 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:25.428 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ tcp == 
\t\c\p ]] 00:17:25.428 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:25.428 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:25.428 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-save 00:17:25.428 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:25.428 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-restore 00:17:25.428 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:25.428 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:25.428 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.428 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.428 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.960 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:27.960 00:17:27.960 real 0m10.681s 00:17:27.960 user 0m5.027s 00:17:27.960 sys 0m5.853s 00:17:27.960 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:27.960 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:27.960 ************************************ 00:17:27.960 END TEST nvmf_fused_ordering 00:17:27.960 ************************************ 00:17:27.960 09:51:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:27.960 09:51:56 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:27.960 09:51:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:27.960 09:51:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:27.960 ************************************ 00:17:27.960 START TEST nvmf_ns_masking 00:17:27.960 ************************************ 00:17:27.960 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:27.960 * Looking for test storage... 00:17:27.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:27.960 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:27.960 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:17:27.960 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.961 09:51:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:27.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.961 --rc genhtml_branch_coverage=1 00:17:27.961 --rc genhtml_function_coverage=1 00:17:27.961 --rc genhtml_legend=1 00:17:27.961 --rc geninfo_all_blocks=1 00:17:27.961 --rc geninfo_unexecuted_blocks=1 00:17:27.961 00:17:27.961 ' 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:27.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.961 --rc genhtml_branch_coverage=1 00:17:27.961 --rc genhtml_function_coverage=1 00:17:27.961 --rc genhtml_legend=1 00:17:27.961 --rc geninfo_all_blocks=1 00:17:27.961 --rc geninfo_unexecuted_blocks=1 00:17:27.961 00:17:27.961 ' 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:27.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.961 --rc genhtml_branch_coverage=1 00:17:27.961 --rc genhtml_function_coverage=1 00:17:27.961 --rc genhtml_legend=1 00:17:27.961 --rc geninfo_all_blocks=1 00:17:27.961 --rc geninfo_unexecuted_blocks=1 00:17:27.961 00:17:27.961 ' 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:27.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.961 --rc genhtml_branch_coverage=1 00:17:27.961 --rc 
genhtml_function_coverage=1 00:17:27.961 --rc genhtml_legend=1 00:17:27.961 --rc geninfo_all_blocks=1 00:17:27.961 --rc geninfo_unexecuted_blocks=1 00:17:27.961 00:17:27.961 ' 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:27.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:27.961 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c4581eca-390f-4475-9dc1-ce967e9fc7cb 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=52657872-2788-4c28-b08d-76e86285151d 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=086901d0-b76f-4e49-b67d-9e5f23637876 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g 
is_hw=no 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:27.962 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:33.227 09:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.227 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.228 09:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:33.228 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:33.228 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:33.228 09:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:33.228 Found net devices under 0000:86:00.0: cvl_0_0 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:33.228 Found net devices under 0000:86:00.1: cvl_0_1 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # is_hw=yes 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:33.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:33.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:17:33.228 00:17:33.228 --- 10.0.0.2 ping statistics --- 00:17:33.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.228 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:33.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:33.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:17:33.228 00:17:33.228 --- 10.0.0.1 ping statistics --- 00:17:33.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.228 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # return 0 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=1223335 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 1223335 
00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1223335 ']' 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:33.228 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:33.229 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:33.229 [2024-12-07 09:52:01.798336] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:33.229 [2024-12-07 09:52:01.798385] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.229 [2024-12-07 09:52:01.856121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.229 [2024-12-07 09:52:01.896800] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.229 [2024-12-07 09:52:01.896837] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:33.229 [2024-12-07 09:52:01.896844] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.229 [2024-12-07 09:52:01.896851] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.229 [2024-12-07 09:52:01.896856] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.229 [2024-12-07 09:52:01.896879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.486 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.486 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:33.486 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:33.486 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:33.486 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:33.486 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.486 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:33.486 [2024-12-07 09:52:02.186996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.486 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:33.486 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:33.486 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:17:33.745 Malloc1 00:17:33.745 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:34.004 Malloc2 00:17:34.004 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:34.263 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:34.522 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.522 [2024-12-07 09:52:03.163150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.522 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:34.522 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 086901d0-b76f-4e49-b67d-9e5f23637876 -a 10.0.0.2 -s 4420 -i 4 00:17:34.780 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:34.780 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:34.780 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:34.780 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:34.780 09:52:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:36.683 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:36.683 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:36.683 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:36.683 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:36.683 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:36.683 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:36.683 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:36.683 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:36.941 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:36.941 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:36.941 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:36.941 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:36.941 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:36.941 [ 0]:0x1 00:17:36.941 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:36.941 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:36.941 
09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=12e2ace5579f435393f7945e26d6f32c 00:17:36.941 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 12e2ace5579f435393f7945e26d6f32c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:36.941 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:37.199 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:37.199 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:37.199 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:37.199 [ 0]:0x1 00:17:37.199 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:37.199 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:37.199 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=12e2ace5579f435393f7945e26d6f32c 00:17:37.199 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 12e2ace5579f435393f7945e26d6f32c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:37.199 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:37.199 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:37.199 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:37.199 [ 1]:0x2 00:17:37.199 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
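The trace above repeatedly runs a `ns_is_visible` helper from `ns_masking.sh`: it greps `nvme list-ns /dev/nvme0` for the namespace ID, then reads the NGUID via `nvme id-ns ... -o json | jq -r .nguid` and treats an all-zero NGUID as a masked (inactive) namespace. A minimal self-contained sketch of that logic, using canned sample output and an NGUID value taken from this log rather than a live device:

```shell
# Sketch of the ns_is_visible check reconstructed from the trace above.
# The list-ns output and NGUID below are illustrative samples, not live data.
ns_is_visible() {
    local nsid=$1
    # Stand-in for `nvme list-ns /dev/nvme0` output seen in the log.
    local list_ns_output='[ 0]:0x1
[ 1]:0x2'
    # Namespace must appear in the list at all.
    grep -q "$nsid" <<< "$list_ns_output" || return 1
    # Stand-in for `nvme id-ns /dev/nvme0 -n $nsid -o json | jq -r .nguid`;
    # an all-zero NGUID means Identify returned an inactive (masked) namespace.
    local nguid=61a6ff85a4734b869e023c4b12fa92eb
    [[ $nguid != "00000000000000000000000000000000" ]]
}
```

On a real initiator the two hardcoded samples would be replaced by the actual `nvme list-ns` and `nvme id-ns` invocations shown in the trace.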
00:17:37.199 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:37.199 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=61a6ff85a4734b869e023c4b12fa92eb 00:17:37.199 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 61a6ff85a4734b869e023c4b12fa92eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:37.199 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:37.199 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:37.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:37.457 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:37.457 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:37.715 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:37.715 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 086901d0-b76f-4e49-b67d-9e5f23637876 -a 10.0.0.2 -s 4420 -i 4 00:17:37.973 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:37.973 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:37.973 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:37.973 09:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:37.973 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:37.973 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:39.869 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:39.870 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:40.127 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:40.127 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:40.127 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:40.127 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:40.127 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:40.127 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:40.127 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
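The `NOT ns_is_visible 0x1` steps in the trace use a status-inverting wrapper from `autotest_common.sh`: the test step passes only when the wrapped command fails, which is how the log asserts that a masked namespace is *not* visible. A hedged reconstruction of that helper, matching the `es=1`, `(( es > 128 ))`, and `(( !es == 0 ))` checks visible above:

```shell
# Minimal sketch of the NOT helper inferred from the autotest_common.sh trace:
# run a command and invert its exit status.
NOT() {
    local es=0
    "$@" || es=$?
    # Exit codes above 128 indicate a signal; treat those as real failures,
    # not an expected negative result (mirrors the (( es > 128 )) check).
    (( es > 128 )) && return "$es"
    # Succeed only if the wrapped command failed.
    (( es != 0 ))
}
```

So `NOT ns_is_visible 0x1` returns success exactly when namespace 1 is hidden from the connected host.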
# ns_is_visible 0x2 00:17:40.127 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:40.127 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:40.127 [ 0]:0x2 00:17:40.127 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:40.127 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:40.127 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=61a6ff85a4734b869e023c4b12fa92eb 00:17:40.127 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 61a6ff85a4734b869e023c4b12fa92eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:40.127 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:40.384 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:40.384 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:40.384 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:40.384 [ 0]:0x1 00:17:40.384 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:40.384 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:40.384 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=12e2ace5579f435393f7945e26d6f32c 00:17:40.384 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 12e2ace5579f435393f7945e26d6f32c != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:40.384 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:40.384 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:40.384 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:40.384 [ 1]:0x2 00:17:40.384 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:40.384 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:40.384 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=61a6ff85a4734b869e023c4b12fa92eb 00:17:40.384 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 61a6ff85a4734b869e023c4b12fa92eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:40.384 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
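The sequence above is the core of the masking test: namespace 1 is added with `--no-auto-visible`, stays hidden until `nvmf_ns_add_host` grants `nqn.2016-06.io.spdk:host1` access, and disappears again after `nvmf_ns_remove_host`. A toy shell model of those semantics (an assumption-level sketch for illustration, not SPDK code; the NQNs mirror the ones in the log):

```shell
# Toy model of per-host namespace visibility as exercised by ns_masking.sh.
declare -A ns_hosts          # nsid -> space-separated allowed host NQNs
declare -A ns_auto_visible   # nsid -> yes/no

# add_ns <nsid> <auto_visible yes|no>: yes ~ default, no ~ --no-auto-visible
add_ns()     { ns_auto_visible[$1]=$2; ns_hosts[$1]=""; }
# add_host <nsid> <host_nqn>: grant one host access (nvmf_ns_add_host)
add_host()   { ns_hosts[$1]+=" $2"; }
# visible_to <nsid> <host_nqn>: visible if auto-visible or explicitly granted
visible_to() { [[ ${ns_auto_visible[$1]} == yes || \
                 " ${ns_hosts[$1]} " == *" $2 "* ]]; }
```

The final JSON-RPC error in the trace (`nvmf_ns_remove_host` on namespace 2 returning `Invalid parameters`) is consistent with this model: namespace 2 was added auto-visible, so it has no per-host grant to remove.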
ns_is_visible 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:40.642 [ 0]:0x2 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=61a6ff85a4734b869e023c4b12fa92eb 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 61a6ff85a4734b869e023c4b12fa92eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:40.642 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:40.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.899 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:40.899 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:40.899 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 086901d0-b76f-4e49-b67d-9e5f23637876 -a 10.0.0.2 -s 4420 -i 4 00:17:41.157 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:41.157 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:41.157 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:41.157 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:41.157 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:41.157 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:43.683 [ 0]:0x1 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:43.683 09:52:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=12e2ace5579f435393f7945e26d6f32c 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 12e2ace5579f435393f7945e26d6f32c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.683 [ 1]:0x2 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:43.683 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.684 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=61a6ff85a4734b869e023c4b12fa92eb 00:17:43.684 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 61a6ff85a4734b869e023c4b12fa92eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.684 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:43.684 
09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.684 [ 0]:0x2 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=61a6ff85a4734b869e023c4b12fa92eb 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 61a6ff85a4734b869e023c4b12fa92eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.684 09:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:43.684 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:43.941 [2024-12-07 09:52:12.449519] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:43.941 request: 00:17:43.941 { 00:17:43.941 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.941 "nsid": 2, 00:17:43.941 "host": "nqn.2016-06.io.spdk:host1", 00:17:43.941 "method": "nvmf_ns_remove_host", 00:17:43.941 "req_id": 1 00:17:43.941 } 00:17:43.941 Got JSON-RPC error response 00:17:43.941 response: 00:17:43.941 { 00:17:43.941 "code": -32602, 00:17:43.941 "message": "Invalid parameters" 00:17:43.941 } 00:17:43.941 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:43.941 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:43.941 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:43.941 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:43.941 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:43.941 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:43.941 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:43.941 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:43.942 09:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:43.942 [ 0]:0x2 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=61a6ff85a4734b869e023c4b12fa92eb 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 61a6ff85a4734b869e023c4b12fa92eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:43.942 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:44.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.200 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1225324 00:17:44.200 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.200 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1225324 
/var/tmp/host.sock 00:17:44.200 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:44.200 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1225324 ']' 00:17:44.200 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:44.200 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.200 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:44.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:44.200 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.200 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:44.200 [2024-12-07 09:52:12.817346] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:17:44.200 [2024-12-07 09:52:12.817392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225324 ] 00:17:44.200 [2024-12-07 09:52:12.871374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.200 [2024-12-07 09:52:12.910709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.458 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:44.458 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:44.458 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:44.715 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:44.973 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c4581eca-390f-4475-9dc1-ce967e9fc7cb 00:17:44.973 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:17:44.973 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C4581ECA390F44759DC1CE967E9FC7CB -i 00:17:44.973 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 52657872-2788-4c28-b08d-76e86285151d 00:17:44.973 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:17:44.973 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 5265787227884C28B08D76E86285151D -i 00:17:45.230 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:45.488 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:45.746 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:45.746 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:46.005 nvme0n1 00:17:46.005 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:46.005 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:46.264 nvme1n2 00:17:46.264 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:46.264 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:46.264 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:46.264 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:46.264 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:46.520 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:46.520 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:46.520 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:46.520 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:46.777 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c4581eca-390f-4475-9dc1-ce967e9fc7cb == \c\4\5\8\1\e\c\a\-\3\9\0\f\-\4\4\7\5\-\9\d\c\1\-\c\e\9\6\7\e\9\f\c\7\c\b ]] 00:17:46.777 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:46.777 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:46.777 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:47.036 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 52657872-2788-4c28-b08d-76e86285151d == \5\2\6\5\7\8\7\2\-\2\7\8\8\-\4\c\2\8\-\b\0\8\d\-\7\6\e\8\6\2\8\5\1\5\1\d ]] 00:17:47.036 09:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1225324 00:17:47.036 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1225324 ']' 00:17:47.036 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1225324 00:17:47.036 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:47.036 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:47.036 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1225324 00:17:47.036 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:47.036 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:47.036 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1225324' 00:17:47.036 killing process with pid 1225324 00:17:47.036 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1225324 00:17:47.036 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1225324 00:17:47.295 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- nvmf/common.sh@121 -- # sync 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:47.552 rmmod nvme_tcp 00:17:47.552 rmmod nvme_fabrics 00:17:47.552 rmmod nvme_keyring 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 1223335 ']' 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 1223335 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1223335 ']' 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1223335 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1223335 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:47.552 09:52:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1223335' 00:17:47.552 killing process with pid 1223335 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1223335 00:17:47.552 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1223335 00:17:47.810 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:47.810 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:47.810 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:47.810 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:47.810 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:47.810 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-save 00:17:47.810 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-restore 00:17:47.810 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:47.810 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:47.810 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.810 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.810 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.341 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:50.341 00:17:50.341 real 0m22.274s 00:17:50.341 user 0m23.810s 00:17:50.341 sys 
0m6.304s 00:17:50.341 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:50.341 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:50.341 ************************************ 00:17:50.341 END TEST nvmf_ns_masking 00:17:50.341 ************************************ 00:17:50.341 09:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:50.341 09:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:50.341 09:52:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:50.341 09:52:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:50.341 09:52:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:50.341 ************************************ 00:17:50.341 START TEST nvmf_nvme_cli 00:17:50.341 ************************************ 00:17:50.341 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:50.341 * Looking for test storage... 
00:17:50.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:50.341 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:50.342 09:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:50.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.342 --rc 
genhtml_branch_coverage=1 00:17:50.342 --rc genhtml_function_coverage=1 00:17:50.342 --rc genhtml_legend=1 00:17:50.342 --rc geninfo_all_blocks=1 00:17:50.342 --rc geninfo_unexecuted_blocks=1 00:17:50.342 00:17:50.342 ' 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:50.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.342 --rc genhtml_branch_coverage=1 00:17:50.342 --rc genhtml_function_coverage=1 00:17:50.342 --rc genhtml_legend=1 00:17:50.342 --rc geninfo_all_blocks=1 00:17:50.342 --rc geninfo_unexecuted_blocks=1 00:17:50.342 00:17:50.342 ' 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:50.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.342 --rc genhtml_branch_coverage=1 00:17:50.342 --rc genhtml_function_coverage=1 00:17:50.342 --rc genhtml_legend=1 00:17:50.342 --rc geninfo_all_blocks=1 00:17:50.342 --rc geninfo_unexecuted_blocks=1 00:17:50.342 00:17:50.342 ' 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:50.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.342 --rc genhtml_branch_coverage=1 00:17:50.342 --rc genhtml_function_coverage=1 00:17:50.342 --rc genhtml_legend=1 00:17:50.342 --rc geninfo_all_blocks=1 00:17:50.342 --rc geninfo_unexecuted_blocks=1 00:17:50.342 00:17:50.342 ' 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.342 09:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:50.342 09:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.342 09:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:50.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:50.342 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:50.343 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:50.343 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.343 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:50.343 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:50.343 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:50.343 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.343 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.343 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:50.343 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:50.343 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:50.343 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:50.343 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:55.605 09:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:55.605 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 
00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:55.606 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:55.606 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:55.606 09:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:55.606 Found net devices under 0000:86:00.0: cvl_0_0 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.606 09:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:55.606 Found net devices under 0000:86:00.1: cvl_0_1 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # is_hw=yes 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:55.606 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:55.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:55.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:17:55.864 00:17:55.864 --- 10.0.0.2 ping statistics --- 00:17:55.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.864 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:55.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:55.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:17:55.864 00:17:55.864 --- 10.0.0.1 ping statistics --- 00:17:55.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.864 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # return 0 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:55.864 09:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # nvmfpid=1229348 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # waitforlisten 1229348 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1229348 ']' 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:55.864 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:55.864 [2024-12-07 09:52:24.534470] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:17:55.864 [2024-12-07 09:52:24.534515] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.123 [2024-12-07 09:52:24.591233] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:56.123 [2024-12-07 09:52:24.635048] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.123 [2024-12-07 09:52:24.635089] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.123 [2024-12-07 09:52:24.635096] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.123 [2024-12-07 09:52:24.635102] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.123 [2024-12-07 09:52:24.635107] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:56.123 [2024-12-07 09:52:24.635152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.123 [2024-12-07 09:52:24.635255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.123 [2024-12-07 09:52:24.635333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:56.123 [2024-12-07 09:52:24.635334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:56.123 [2024-12-07 09:52:24.783126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:56.123 Malloc0 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:56.123 Malloc1 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.123 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:56.382 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.382 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:56.382 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.382 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:56.382 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.382 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.382 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.382 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:56.382 [2024-12-07 09:52:24.863465] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.382 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.382 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:56.382 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.382 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:56.382 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.382 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:17:56.382 00:17:56.382 Discovery Log Number of Records 2, Generation counter 2 00:17:56.382 =====Discovery Log Entry 0====== 00:17:56.382 trtype: tcp 00:17:56.382 adrfam: ipv4 00:17:56.382 subtype: current discovery subsystem 00:17:56.382 treq: not required 00:17:56.382 portid: 0 00:17:56.382 trsvcid: 4420 
00:17:56.382 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:56.382 traddr: 10.0.0.2 00:17:56.382 eflags: explicit discovery connections, duplicate discovery information 00:17:56.383 sectype: none 00:17:56.383 =====Discovery Log Entry 1====== 00:17:56.383 trtype: tcp 00:17:56.383 adrfam: ipv4 00:17:56.383 subtype: nvme subsystem 00:17:56.383 treq: not required 00:17:56.383 portid: 0 00:17:56.383 trsvcid: 4420 00:17:56.383 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:56.383 traddr: 10.0.0.2 00:17:56.383 eflags: none 00:17:56.383 sectype: none 00:17:56.383 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:56.383 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:56.383 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:17:56.383 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:17:56.383 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:17:56.383 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:17:56.383 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:17:56.383 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:17:56.383 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:17:56.383 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:56.383 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:57.752 09:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:57.752 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:17:57.752 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:57.752 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:57.752 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:57.752 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:17:59.653 
09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:59.653 /dev/nvme0n2 ]] 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:17:59.653 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:17:59.912 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:17:59.912 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:17:59.912 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ 
--------------------- == /dev/nvme* ]] 00:17:59.912 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:17:59.912 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:59.912 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:17:59.912 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:17:59.912 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:59.912 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:17:59.912 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:17:59.912 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:59.912 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:00.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # 
return 0 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:00.171 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:00.171 rmmod nvme_tcp 00:18:00.171 rmmod nvme_fabrics 00:18:00.171 rmmod nvme_keyring 00:18:00.430 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:00.430 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:00.430 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:00.430 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@513 -- # '[' -n 1229348 ']' 
00:18:00.430 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # killprocess 1229348 00:18:00.430 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1229348 ']' 00:18:00.430 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1229348 00:18:00.430 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:18:00.430 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:00.430 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1229348 00:18:00.430 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:00.430 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:00.430 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1229348' 00:18:00.430 killing process with pid 1229348 00:18:00.430 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1229348 00:18:00.430 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1229348 00:18:00.689 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:00.689 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:00.689 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:00.689 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:00.689 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:00.689 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # 
iptables-save 00:18:00.689 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-restore 00:18:00.689 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.689 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:00.689 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.689 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.689 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.591 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:02.591 00:18:02.591 real 0m12.675s 00:18:02.591 user 0m19.759s 00:18:02.591 sys 0m4.831s 00:18:02.591 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:02.591 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:02.591 ************************************ 00:18:02.591 END TEST nvmf_nvme_cli 00:18:02.591 ************************************ 00:18:02.591 09:52:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:02.591 09:52:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:02.591 09:52:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:02.591 09:52:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:02.591 09:52:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:02.850 ************************************ 00:18:02.850 
START TEST nvmf_vfio_user 00:18:02.850 ************************************ 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:02.850 * Looking for test storage... 00:18:02.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.850 09:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:02.850 09:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:02.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.850 --rc genhtml_branch_coverage=1 00:18:02.850 --rc genhtml_function_coverage=1 00:18:02.850 --rc genhtml_legend=1 00:18:02.850 --rc geninfo_all_blocks=1 00:18:02.850 --rc geninfo_unexecuted_blocks=1 00:18:02.850 00:18:02.850 ' 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:02.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.850 --rc genhtml_branch_coverage=1 00:18:02.850 --rc genhtml_function_coverage=1 00:18:02.850 --rc genhtml_legend=1 00:18:02.850 --rc geninfo_all_blocks=1 00:18:02.850 --rc geninfo_unexecuted_blocks=1 00:18:02.850 00:18:02.850 ' 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:02.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.850 --rc genhtml_branch_coverage=1 00:18:02.850 --rc genhtml_function_coverage=1 00:18:02.850 --rc genhtml_legend=1 00:18:02.850 --rc geninfo_all_blocks=1 00:18:02.850 --rc geninfo_unexecuted_blocks=1 00:18:02.850 00:18:02.850 ' 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:02.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.850 --rc genhtml_branch_coverage=1 00:18:02.850 --rc genhtml_function_coverage=1 00:18:02.850 --rc genhtml_legend=1 00:18:02.850 --rc geninfo_all_blocks=1 00:18:02.850 --rc geninfo_unexecuted_blocks=1 00:18:02.850 00:18:02.850 ' 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.850 
09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.850 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:02.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:02.851 09:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1230636 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1230636' 00:18:02.851 Process pid: 1230636 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1230636 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' 
-z 1230636 ']' 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:02.851 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:03.109 [2024-12-07 09:52:31.597189] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:03.109 [2024-12-07 09:52:31.597238] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.109 [2024-12-07 09:52:31.650111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:03.109 [2024-12-07 09:52:31.691898] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.109 [2024-12-07 09:52:31.691939] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.109 [2024-12-07 09:52:31.691949] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.109 [2024-12-07 09:52:31.691956] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.109 [2024-12-07 09:52:31.691961] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:03.110 [2024-12-07 09:52:31.692009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.110 [2024-12-07 09:52:31.692107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.110 [2024-12-07 09:52:31.692173] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:03.110 [2024-12-07 09:52:31.692175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.110 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:03.110 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:03.110 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:04.482 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:04.482 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:04.482 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:04.482 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:04.482 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:04.482 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:04.482 Malloc1 00:18:04.739 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:04.739 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:04.995 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:05.252 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:05.252 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:05.252 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:05.509 Malloc2 00:18:05.509 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:05.509 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:05.780 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:06.041 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:06.041 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:06.041 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:18:06.041 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:06.041 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:06.041 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:06.041 [2024-12-07 09:52:34.628452] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:06.041 [2024-12-07 09:52:34.628477] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231120 ] 00:18:06.041 [2024-12-07 09:52:34.652508] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:06.041 [2024-12-07 09:52:34.665259] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:06.041 [2024-12-07 09:52:34.665278] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2c20707000 00:18:06.041 [2024-12-07 09:52:34.666256] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:06.041 [2024-12-07 09:52:34.667255] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:06.041 [2024-12-07 09:52:34.668260] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:06.041 [2024-12-07 09:52:34.669266] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:06.041 [2024-12-07 09:52:34.670268] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:06.041 [2024-12-07 09:52:34.671275] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:06.041 [2024-12-07 09:52:34.672276] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:06.041 [2024-12-07 09:52:34.673283] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:06.042 [2024-12-07 09:52:34.674291] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:06.042 [2024-12-07 09:52:34.674300] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2c1f410000 00:18:06.042 [2024-12-07 09:52:34.675237] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:06.042 [2024-12-07 09:52:34.684777] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:06.042 [2024-12-07 09:52:34.684802] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:18:06.042 [2024-12-07 09:52:34.689381] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:06.042 
[2024-12-07 09:52:34.689417] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:06.042 [2024-12-07 09:52:34.689490] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:18:06.042 [2024-12-07 09:52:34.689506] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:18:06.042 [2024-12-07 09:52:34.689511] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:18:06.042 [2024-12-07 09:52:34.690384] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:06.042 [2024-12-07 09:52:34.690392] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:18:06.042 [2024-12-07 09:52:34.690398] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:18:06.042 [2024-12-07 09:52:34.691385] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:06.042 [2024-12-07 09:52:34.691392] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:18:06.042 [2024-12-07 09:52:34.691399] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:18:06.042 [2024-12-07 09:52:34.692389] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:06.042 [2024-12-07 09:52:34.692397] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:06.042 [2024-12-07 09:52:34.693396] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:06.042 [2024-12-07 09:52:34.693404] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:18:06.042 [2024-12-07 09:52:34.693409] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:18:06.042 [2024-12-07 09:52:34.693414] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:06.042 [2024-12-07 09:52:34.693519] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:18:06.042 [2024-12-07 09:52:34.693524] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:06.042 [2024-12-07 09:52:34.693528] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:06.042 [2024-12-07 09:52:34.694407] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:06.042 [2024-12-07 09:52:34.695412] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:06.042 [2024-12-07 09:52:34.696423] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:06.042 [2024-12-07 09:52:34.697421] vfio_user.c:2836:enable_ctrlr: 
*NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:06.042 [2024-12-07 09:52:34.697488] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:06.042 [2024-12-07 09:52:34.698443] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:06.042 [2024-12-07 09:52:34.698451] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:06.042 [2024-12-07 09:52:34.698455] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:18:06.042 [2024-12-07 09:52:34.698472] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:18:06.042 [2024-12-07 09:52:34.698479] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:18:06.042 [2024-12-07 09:52:34.698492] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:06.042 [2024-12-07 09:52:34.698496] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:06.042 [2024-12-07 09:52:34.698500] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:06.042 [2024-12-07 09:52:34.698513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:06.042 [2024-12-07 09:52:34.698548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:06.042 [2024-12-07 
09:52:34.698557] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:18:06.042 [2024-12-07 09:52:34.698562] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:18:06.042 [2024-12-07 09:52:34.698565] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:18:06.042 [2024-12-07 09:52:34.698570] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:06.042 [2024-12-07 09:52:34.698574] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:18:06.042 [2024-12-07 09:52:34.698579] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:18:06.042 [2024-12-07 09:52:34.698583] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:18:06.042 [2024-12-07 09:52:34.698590] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:18:06.042 [2024-12-07 09:52:34.698599] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:06.042 [2024-12-07 09:52:34.698614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:06.042 [2024-12-07 09:52:34.698624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:06.042 [2024-12-07 09:52:34.698632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 
cdw10:00000000 cdw11:00000000 00:18:06.042 [2024-12-07 09:52:34.698640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:06.042 [2024-12-07 09:52:34.698647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:06.042 [2024-12-07 09:52:34.698652] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:06.042 [2024-12-07 09:52:34.698659] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:06.042 [2024-12-07 09:52:34.698669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:06.042 [2024-12-07 09:52:34.698677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:06.042 [2024-12-07 09:52:34.698683] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:18:06.042 [2024-12-07 09:52:34.698687] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:06.042 [2024-12-07 09:52:34.698693] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:18:06.042 [2024-12-07 09:52:34.698702] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:06.042 [2024-12-07 09:52:34.698710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:06.042 [2024-12-07 09:52:34.698722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:06.042 [2024-12-07 09:52:34.698774] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:18:06.042 [2024-12-07 09:52:34.698781] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:06.042 [2024-12-07 09:52:34.698788] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:06.042 [2024-12-07 09:52:34.698792] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:06.043 [2024-12-07 09:52:34.698795] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:06.043 [2024-12-07 09:52:34.698801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:06.043 [2024-12-07 09:52:34.698813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:06.043 [2024-12-07 09:52:34.698821] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:18:06.043 [2024-12-07 09:52:34.698831] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:18:06.043 [2024-12-07 09:52:34.698838] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:18:06.043 [2024-12-07 09:52:34.698845] 
nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:06.043 [2024-12-07 09:52:34.698848] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:06.043 [2024-12-07 09:52:34.698851] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:06.043 [2024-12-07 09:52:34.698857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:06.043 [2024-12-07 09:52:34.698878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:06.043 [2024-12-07 09:52:34.698889] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:06.043 [2024-12-07 09:52:34.698896] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:06.043 [2024-12-07 09:52:34.698904] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:06.043 [2024-12-07 09:52:34.698908] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:06.043 [2024-12-07 09:52:34.698911] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:06.043 [2024-12-07 09:52:34.698916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:06.043 [2024-12-07 09:52:34.698927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:06.043 [2024-12-07 09:52:34.698935] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:06.043 [2024-12-07 09:52:34.698941] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:18:06.043 [2024-12-07 09:52:34.698952] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:18:06.043 [2024-12-07 09:52:34.698958] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:06.043 [2024-12-07 09:52:34.698963] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:06.043 [2024-12-07 09:52:34.698967] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:18:06.043 [2024-12-07 09:52:34.698972] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:18:06.043 [2024-12-07 09:52:34.698976] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:18:06.043 [2024-12-07 09:52:34.698981] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:18:06.043 [2024-12-07 09:52:34.698997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:06.043 [2024-12-07 09:52:34.699007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:06.043 [2024-12-07 09:52:34.699017] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:06.043 [2024-12-07 09:52:34.699030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:06.043 [2024-12-07 09:52:34.699039] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:06.043 [2024-12-07 09:52:34.699047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:06.043 [2024-12-07 09:52:34.699057] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:06.043 [2024-12-07 09:52:34.699069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:06.043 [2024-12-07 09:52:34.699081] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:06.043 [2024-12-07 09:52:34.699085] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:06.043 [2024-12-07 09:52:34.699088] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:06.043 [2024-12-07 09:52:34.699091] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:06.043 [2024-12-07 09:52:34.699094] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:06.043 [2024-12-07 09:52:34.699101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:06.043 [2024-12-07 09:52:34.699108] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:06.043 [2024-12-07 
09:52:34.699112] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:06.043 [2024-12-07 09:52:34.699115] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:06.043 [2024-12-07 09:52:34.699120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:06.043 [2024-12-07 09:52:34.699126] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:06.043 [2024-12-07 09:52:34.699130] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:06.043 [2024-12-07 09:52:34.699133] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:06.043 [2024-12-07 09:52:34.699138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:06.043 [2024-12-07 09:52:34.699145] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:06.043 [2024-12-07 09:52:34.699149] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:06.043 [2024-12-07 09:52:34.699152] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:06.043 [2024-12-07 09:52:34.699157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:06.043 [2024-12-07 09:52:34.699163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:06.043 [2024-12-07 09:52:34.699174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 
00:18:06.043 [2024-12-07 09:52:34.699183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:06.043 [2024-12-07 09:52:34.699189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:06.043 ===================================================== 00:18:06.043 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:06.043 ===================================================== 00:18:06.043 Controller Capabilities/Features 00:18:06.043 ================================ 00:18:06.043 Vendor ID: 4e58 00:18:06.043 Subsystem Vendor ID: 4e58 00:18:06.043 Serial Number: SPDK1 00:18:06.043 Model Number: SPDK bdev Controller 00:18:06.043 Firmware Version: 24.09.1 00:18:06.043 Recommended Arb Burst: 6 00:18:06.043 IEEE OUI Identifier: 8d 6b 50 00:18:06.043 Multi-path I/O 00:18:06.043 May have multiple subsystem ports: Yes 00:18:06.044 May have multiple controllers: Yes 00:18:06.044 Associated with SR-IOV VF: No 00:18:06.044 Max Data Transfer Size: 131072 00:18:06.044 Max Number of Namespaces: 32 00:18:06.044 Max Number of I/O Queues: 127 00:18:06.044 NVMe Specification Version (VS): 1.3 00:18:06.044 NVMe Specification Version (Identify): 1.3 00:18:06.044 Maximum Queue Entries: 256 00:18:06.044 Contiguous Queues Required: Yes 00:18:06.044 Arbitration Mechanisms Supported 00:18:06.044 Weighted Round Robin: Not Supported 00:18:06.044 Vendor Specific: Not Supported 00:18:06.044 Reset Timeout: 15000 ms 00:18:06.044 Doorbell Stride: 4 bytes 00:18:06.044 NVM Subsystem Reset: Not Supported 00:18:06.044 Command Sets Supported 00:18:06.044 NVM Command Set: Supported 00:18:06.044 Boot Partition: Not Supported 00:18:06.044 Memory Page Size Minimum: 4096 bytes 00:18:06.044 Memory Page Size Maximum: 4096 bytes 00:18:06.044 Persistent Memory Region: Not Supported 00:18:06.044 Optional Asynchronous Events 
Supported 00:18:06.044 Namespace Attribute Notices: Supported 00:18:06.044 Firmware Activation Notices: Not Supported 00:18:06.044 ANA Change Notices: Not Supported 00:18:06.044 PLE Aggregate Log Change Notices: Not Supported 00:18:06.044 LBA Status Info Alert Notices: Not Supported 00:18:06.044 EGE Aggregate Log Change Notices: Not Supported 00:18:06.044 Normal NVM Subsystem Shutdown event: Not Supported 00:18:06.044 Zone Descriptor Change Notices: Not Supported 00:18:06.044 Discovery Log Change Notices: Not Supported 00:18:06.044 Controller Attributes 00:18:06.044 128-bit Host Identifier: Supported 00:18:06.044 Non-Operational Permissive Mode: Not Supported 00:18:06.044 NVM Sets: Not Supported 00:18:06.044 Read Recovery Levels: Not Supported 00:18:06.044 Endurance Groups: Not Supported 00:18:06.044 Predictable Latency Mode: Not Supported 00:18:06.044 Traffic Based Keep ALive: Not Supported 00:18:06.044 Namespace Granularity: Not Supported 00:18:06.044 SQ Associations: Not Supported 00:18:06.044 UUID List: Not Supported 00:18:06.044 Multi-Domain Subsystem: Not Supported 00:18:06.044 Fixed Capacity Management: Not Supported 00:18:06.044 Variable Capacity Management: Not Supported 00:18:06.044 Delete Endurance Group: Not Supported 00:18:06.044 Delete NVM Set: Not Supported 00:18:06.044 Extended LBA Formats Supported: Not Supported 00:18:06.044 Flexible Data Placement Supported: Not Supported 00:18:06.044 00:18:06.044 Controller Memory Buffer Support 00:18:06.044 ================================ 00:18:06.044 Supported: No 00:18:06.044 00:18:06.044 Persistent Memory Region Support 00:18:06.044 ================================ 00:18:06.044 Supported: No 00:18:06.044 00:18:06.044 Admin Command Set Attributes 00:18:06.044 ============================ 00:18:06.044 Security Send/Receive: Not Supported 00:18:06.044 Format NVM: Not Supported 00:18:06.044 Firmware Activate/Download: Not Supported 00:18:06.044 Namespace Management: Not Supported 00:18:06.044 Device Self-Test: 
Not Supported 00:18:06.044 Directives: Not Supported 00:18:06.044 NVMe-MI: Not Supported 00:18:06.044 Virtualization Management: Not Supported 00:18:06.044 Doorbell Buffer Config: Not Supported 00:18:06.044 Get LBA Status Capability: Not Supported 00:18:06.044 Command & Feature Lockdown Capability: Not Supported 00:18:06.044 Abort Command Limit: 4 00:18:06.044 Async Event Request Limit: 4 00:18:06.044 Number of Firmware Slots: N/A 00:18:06.044 Firmware Slot 1 Read-Only: N/A 00:18:06.044 Firmware Activation Without Reset: N/A 00:18:06.044 Multiple Update Detection Support: N/A 00:18:06.044 Firmware Update Granularity: No Information Provided 00:18:06.044 Per-Namespace SMART Log: No 00:18:06.044 Asymmetric Namespace Access Log Page: Not Supported 00:18:06.044 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:06.044 Command Effects Log Page: Supported 00:18:06.044 Get Log Page Extended Data: Supported 00:18:06.044 Telemetry Log Pages: Not Supported 00:18:06.044 Persistent Event Log Pages: Not Supported 00:18:06.044 Supported Log Pages Log Page: May Support 00:18:06.044 Commands Supported & Effects Log Page: Not Supported 00:18:06.044 Feature Identifiers & Effects Log Page:May Support 00:18:06.044 NVMe-MI Commands & Effects Log Page: May Support 00:18:06.044 Data Area 4 for Telemetry Log: Not Supported 00:18:06.044 Error Log Page Entries Supported: 128 00:18:06.044 Keep Alive: Supported 00:18:06.044 Keep Alive Granularity: 10000 ms 00:18:06.044 00:18:06.044 NVM Command Set Attributes 00:18:06.044 ========================== 00:18:06.044 Submission Queue Entry Size 00:18:06.044 Max: 64 00:18:06.044 Min: 64 00:18:06.044 Completion Queue Entry Size 00:18:06.044 Max: 16 00:18:06.044 Min: 16 00:18:06.044 Number of Namespaces: 32 00:18:06.044 Compare Command: Supported 00:18:06.044 Write Uncorrectable Command: Not Supported 00:18:06.044 Dataset Management Command: Supported 00:18:06.044 Write Zeroes Command: Supported 00:18:06.044 Set Features Save Field: Not Supported 
00:18:06.044 Reservations: Not Supported 00:18:06.044 Timestamp: Not Supported 00:18:06.044 Copy: Supported 00:18:06.044 Volatile Write Cache: Present 00:18:06.044 Atomic Write Unit (Normal): 1 00:18:06.044 Atomic Write Unit (PFail): 1 00:18:06.044 Atomic Compare & Write Unit: 1 00:18:06.044 Fused Compare & Write: Supported 00:18:06.044 Scatter-Gather List 00:18:06.044 SGL Command Set: Supported (Dword aligned) 00:18:06.044 SGL Keyed: Not Supported 00:18:06.044 SGL Bit Bucket Descriptor: Not Supported 00:18:06.044 SGL Metadata Pointer: Not Supported 00:18:06.044 Oversized SGL: Not Supported 00:18:06.044 SGL Metadata Address: Not Supported 00:18:06.044 SGL Offset: Not Supported 00:18:06.044 Transport SGL Data Block: Not Supported 00:18:06.044 Replay Protected Memory Block: Not Supported 00:18:06.044 00:18:06.044 Firmware Slot Information 00:18:06.044 ========================= 00:18:06.044 Active slot: 1 00:18:06.044 Slot 1 Firmware Revision: 24.09.1 00:18:06.044 00:18:06.044 00:18:06.044 Commands Supported and Effects 00:18:06.044 ============================== 00:18:06.044 Admin Commands 00:18:06.044 -------------- 00:18:06.044 Get Log Page (02h): Supported 00:18:06.044 Identify (06h): Supported 00:18:06.044 Abort (08h): Supported 00:18:06.044 Set Features (09h): Supported 00:18:06.044 Get Features (0Ah): Supported 00:18:06.044 Asynchronous Event Request (0Ch): Supported 00:18:06.044 Keep Alive (18h): Supported 00:18:06.044 I/O Commands 00:18:06.044 ------------ 00:18:06.044 Flush (00h): Supported LBA-Change 00:18:06.044 Write (01h): Supported LBA-Change 00:18:06.044 Read (02h): Supported 00:18:06.044 Compare (05h): Supported 00:18:06.044 Write Zeroes (08h): Supported LBA-Change 00:18:06.044 Dataset Management (09h): Supported LBA-Change 00:18:06.045 Copy (19h): Supported LBA-Change 00:18:06.045 00:18:06.045 Error Log 00:18:06.045 ========= 00:18:06.045 00:18:06.045 Arbitration 00:18:06.045 =========== 00:18:06.045 Arbitration Burst: 1 00:18:06.045 00:18:06.045 
Power Management 00:18:06.045 ================ 00:18:06.045 Number of Power States: 1 00:18:06.045 Current Power State: Power State #0 00:18:06.045 Power State #0: 00:18:06.045 Max Power: 0.00 W 00:18:06.045 Non-Operational State: Operational 00:18:06.045 Entry Latency: Not Reported 00:18:06.045 Exit Latency: Not Reported 00:18:06.045 Relative Read Throughput: 0 00:18:06.045 Relative Read Latency: 0 00:18:06.045 Relative Write Throughput: 0 00:18:06.045 Relative Write Latency: 0 00:18:06.045 Idle Power: Not Reported 00:18:06.045 Active Power: Not Reported 00:18:06.045 Non-Operational Permissive Mode: Not Supported 00:18:06.045 00:18:06.045 Health Information 00:18:06.045 ================== 00:18:06.045 Critical Warnings: 00:18:06.045 Available Spare Space: OK 00:18:06.045 Temperature: OK 00:18:06.045 Device Reliability: OK 00:18:06.045 Read Only: No 00:18:06.045 Volatile Memory Backup: OK 00:18:06.045 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:06.045 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:06.045 Available Spare: 0% 00:18:06.045 Availabl[2024-12-07 09:52:34.699278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:06.045 [2024-12-07 09:52:34.699290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:06.045 [2024-12-07 09:52:34.699316] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:18:06.045 [2024-12-07 09:52:34.699324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.045 [2024-12-07 09:52:34.699330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.045 [2024-12-07 09:52:34.699336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.045 [2024-12-07 09:52:34.699341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.045 [2024-12-07 09:52:34.702955] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:06.045 [2024-12-07 09:52:34.702966] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:06.045 [2024-12-07 09:52:34.703467] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:06.045 [2024-12-07 09:52:34.703517] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:18:06.045 [2024-12-07 09:52:34.703526] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:18:06.045 [2024-12-07 09:52:34.704492] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:06.045 [2024-12-07 09:52:34.704503] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:18:06.045 [2024-12-07 09:52:34.704557] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:06.045 [2024-12-07 09:52:34.706561] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:06.045 e Spare Threshold: 0% 00:18:06.045 Life Percentage Used: 0% 00:18:06.045 Data Units Read: 0 00:18:06.045 Data Units Written: 0 00:18:06.045 Host Read Commands: 0 00:18:06.045 Host Write Commands: 0 00:18:06.045 Controller Busy Time: 0 minutes 
00:18:06.045 Power Cycles: 0 00:18:06.045 Power On Hours: 0 hours 00:18:06.045 Unsafe Shutdowns: 0 00:18:06.045 Unrecoverable Media Errors: 0 00:18:06.045 Lifetime Error Log Entries: 0 00:18:06.045 Warning Temperature Time: 0 minutes 00:18:06.045 Critical Temperature Time: 0 minutes 00:18:06.045 00:18:06.045 Number of Queues 00:18:06.045 ================ 00:18:06.045 Number of I/O Submission Queues: 127 00:18:06.045 Number of I/O Completion Queues: 127 00:18:06.045 00:18:06.045 Active Namespaces 00:18:06.045 ================= 00:18:06.045 Namespace ID:1 00:18:06.045 Error Recovery Timeout: Unlimited 00:18:06.045 Command Set Identifier: NVM (00h) 00:18:06.045 Deallocate: Supported 00:18:06.045 Deallocated/Unwritten Error: Not Supported 00:18:06.045 Deallocated Read Value: Unknown 00:18:06.045 Deallocate in Write Zeroes: Not Supported 00:18:06.045 Deallocated Guard Field: 0xFFFF 00:18:06.045 Flush: Supported 00:18:06.045 Reservation: Supported 00:18:06.045 Namespace Sharing Capabilities: Multiple Controllers 00:18:06.045 Size (in LBAs): 131072 (0GiB) 00:18:06.045 Capacity (in LBAs): 131072 (0GiB) 00:18:06.045 Utilization (in LBAs): 131072 (0GiB) 00:18:06.045 NGUID: 991F02B7ED1547798721D32E779A8EE5 00:18:06.045 UUID: 991f02b7-ed15-4779-8721-d32e779a8ee5 00:18:06.045 Thin Provisioning: Not Supported 00:18:06.045 Per-NS Atomic Units: Yes 00:18:06.045 Atomic Boundary Size (Normal): 0 00:18:06.045 Atomic Boundary Size (PFail): 0 00:18:06.045 Atomic Boundary Offset: 0 00:18:06.045 Maximum Single Source Range Length: 65535 00:18:06.045 Maximum Copy Length: 65535 00:18:06.045 Maximum Source Range Count: 1 00:18:06.045 NGUID/EUI64 Never Reused: No 00:18:06.045 Namespace Write Protected: No 00:18:06.045 Number of LBA Formats: 1 00:18:06.045 Current LBA Format: LBA Format #00 00:18:06.045 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:06.045 00:18:06.045 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:06.304 [2024-12-07 09:52:34.918790] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:11.566 Initializing NVMe Controllers 00:18:11.566 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:11.566 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:11.566 Initialization complete. Launching workers. 00:18:11.566 ======================================================== 00:18:11.566 Latency(us) 00:18:11.566 Device Information : IOPS MiB/s Average min max 00:18:11.566 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39980.18 156.17 3201.90 981.29 7134.33 00:18:11.566 ======================================================== 00:18:11.566 Total : 39980.18 156.17 3201.90 981.29 7134.33 00:18:11.566 00:18:11.566 [2024-12-07 09:52:39.940116] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:11.566 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:11.566 [2024-12-07 09:52:40.163188] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:16.826 Initializing NVMe Controllers 00:18:16.826 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:16.826 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:16.826 
Initialization complete. Launching workers. 00:18:16.826 ======================================================== 00:18:16.826 Latency(us) 00:18:16.826 Device Information : IOPS MiB/s Average min max 00:18:16.826 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16057.45 62.72 7976.72 7019.99 8067.74 00:18:16.826 ======================================================== 00:18:16.826 Total : 16057.45 62.72 7976.72 7019.99 8067.74 00:18:16.826 00:18:16.826 [2024-12-07 09:52:45.205641] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:16.826 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:16.826 [2024-12-07 09:52:45.395565] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:22.088 [2024-12-07 09:52:50.465264] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:22.088 Initializing NVMe Controllers 00:18:22.088 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:22.088 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:22.088 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:22.088 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:22.088 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:22.088 Initialization complete. Launching workers. 
00:18:22.088 Starting thread on core 2 00:18:22.088 Starting thread on core 3 00:18:22.088 Starting thread on core 1 00:18:22.088 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:22.088 [2024-12-07 09:52:50.753352] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:25.375 [2024-12-07 09:52:53.813878] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:25.375 Initializing NVMe Controllers 00:18:25.375 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:25.375 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:25.375 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:25.375 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:25.375 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:25.375 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:25.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:25.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:25.375 Initialization complete. Launching workers. 
00:18:25.375 Starting thread on core 1 with urgent priority queue 00:18:25.375 Starting thread on core 2 with urgent priority queue 00:18:25.375 Starting thread on core 3 with urgent priority queue 00:18:25.375 Starting thread on core 0 with urgent priority queue 00:18:25.375 SPDK bdev Controller (SPDK1 ) core 0: 9662.00 IO/s 10.35 secs/100000 ios 00:18:25.375 SPDK bdev Controller (SPDK1 ) core 1: 7912.00 IO/s 12.64 secs/100000 ios 00:18:25.375 SPDK bdev Controller (SPDK1 ) core 2: 8240.67 IO/s 12.13 secs/100000 ios 00:18:25.375 SPDK bdev Controller (SPDK1 ) core 3: 8358.67 IO/s 11.96 secs/100000 ios 00:18:25.375 ======================================================== 00:18:25.375 00:18:25.375 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:25.375 [2024-12-07 09:52:54.087385] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:25.632 Initializing NVMe Controllers 00:18:25.632 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:25.632 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:25.632 Namespace ID: 1 size: 0GB 00:18:25.632 Initialization complete. 00:18:25.632 INFO: using host memory buffer for IO 00:18:25.632 Hello world! 
00:18:25.632 [2024-12-07 09:52:54.121647] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:25.632 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:25.890 [2024-12-07 09:52:54.391411] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:26.824 Initializing NVMe Controllers 00:18:26.824 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:26.824 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:26.824 Initialization complete. Launching workers. 00:18:26.824 submit (in ns) avg, min, max = 5878.6, 3226.1, 4000124.3 00:18:26.824 complete (in ns) avg, min, max = 21212.5, 1763.5, 4004169.6 00:18:26.824 00:18:26.824 Submit histogram 00:18:26.824 ================ 00:18:26.824 Range in us Cumulative Count 00:18:26.824 3.214 - 3.228: 0.0061% ( 1) 00:18:26.824 3.228 - 3.242: 0.0243% ( 3) 00:18:26.824 3.242 - 3.256: 0.0303% ( 1) 00:18:26.824 3.256 - 3.270: 0.0425% ( 2) 00:18:26.824 3.270 - 3.283: 0.1396% ( 16) 00:18:26.824 3.283 - 3.297: 1.5175% ( 227) 00:18:26.824 3.297 - 3.311: 5.7117% ( 691) 00:18:26.824 3.311 - 3.325: 11.0653% ( 882) 00:18:26.824 3.325 - 3.339: 16.7891% ( 943) 00:18:26.824 3.339 - 3.353: 23.2838% ( 1070) 00:18:26.824 3.353 - 3.367: 29.7481% ( 1065) 00:18:26.824 3.367 - 3.381: 35.0288% ( 870) 00:18:26.824 3.381 - 3.395: 40.4370% ( 891) 00:18:26.824 3.395 - 3.409: 45.0804% ( 765) 00:18:26.824 3.409 - 3.423: 49.4082% ( 713) 00:18:26.824 3.423 - 3.437: 54.0030% ( 757) 00:18:26.824 3.437 - 3.450: 60.6616% ( 1097) 00:18:26.824 3.450 - 3.464: 66.6950% ( 994) 00:18:26.824 3.464 - 3.478: 70.7557% ( 669) 00:18:26.824 3.478 - 3.492: 76.1517% ( 889) 00:18:26.824 3.492 - 3.506: 80.7162% ( 752) 
00:18:26.824 3.506 - 3.520: 83.5630% ( 469) 00:18:26.824 3.520 - 3.534: 85.6085% ( 337) 00:18:26.824 3.534 - 3.548: 86.5675% ( 158) 00:18:26.824 3.548 - 3.562: 87.0835% ( 85) 00:18:26.824 3.562 - 3.590: 87.8847% ( 132) 00:18:26.824 3.590 - 3.617: 89.1472% ( 208) 00:18:26.824 3.617 - 3.645: 90.9378% ( 295) 00:18:26.824 3.645 - 3.673: 92.6373% ( 280) 00:18:26.824 3.673 - 3.701: 94.3065% ( 275) 00:18:26.824 3.701 - 3.729: 96.1032% ( 296) 00:18:26.824 3.729 - 3.757: 97.6510% ( 255) 00:18:26.824 3.757 - 3.784: 98.4097% ( 125) 00:18:26.824 3.784 - 3.812: 98.9378% ( 87) 00:18:26.824 3.812 - 3.840: 99.3809% ( 73) 00:18:26.824 3.840 - 3.868: 99.5508% ( 28) 00:18:26.824 3.868 - 3.896: 99.5933% ( 7) 00:18:26.824 3.896 - 3.923: 99.6115% ( 3) 00:18:26.824 3.923 - 3.951: 99.6176% ( 1) 00:18:26.824 3.951 - 3.979: 99.6237% ( 1) 00:18:26.824 5.231 - 5.259: 99.6297% ( 1) 00:18:26.824 5.315 - 5.343: 99.6358% ( 1) 00:18:26.824 5.454 - 5.482: 99.6419% ( 1) 00:18:26.824 5.510 - 5.537: 99.6540% ( 2) 00:18:26.824 5.649 - 5.677: 99.6601% ( 1) 00:18:26.824 5.677 - 5.704: 99.6662% ( 1) 00:18:26.824 5.704 - 5.732: 99.6783% ( 2) 00:18:26.824 5.732 - 5.760: 99.6844% ( 1) 00:18:26.824 5.816 - 5.843: 99.6965% ( 2) 00:18:26.824 5.899 - 5.927: 99.7147% ( 3) 00:18:26.824 5.927 - 5.955: 99.7208% ( 1) 00:18:26.824 6.010 - 6.038: 99.7269% ( 1) 00:18:26.824 6.150 - 6.177: 99.7390% ( 2) 00:18:26.824 6.428 - 6.456: 99.7451% ( 1) 00:18:26.824 6.483 - 6.511: 99.7511% ( 1) 00:18:26.824 6.623 - 6.650: 99.7633% ( 2) 00:18:26.824 6.650 - 6.678: 99.7693% ( 1) 00:18:26.824 6.817 - 6.845: 99.7754% ( 1) 00:18:26.824 6.873 - 6.901: 99.7876% ( 2) 00:18:26.824 6.929 - 6.957: 99.7936% ( 1) 00:18:26.824 6.957 - 6.984: 99.7997% ( 1) 00:18:26.824 7.040 - 7.068: 99.8058% ( 1) 00:18:26.824 7.096 - 7.123: 99.8118% ( 1) 00:18:26.824 7.123 - 7.179: 99.8240% ( 2) 00:18:26.824 7.235 - 7.290: 99.8361% ( 2) 00:18:26.824 7.402 - 7.457: 99.8422% ( 1) 00:18:26.824 7.513 - 7.569: 99.8543% ( 2) 00:18:26.824 7.624 - 7.680: 99.8665% ( 2) 
00:18:26.824 7.680 - 7.736: 99.8786% ( 2) 00:18:26.824 7.736 - 7.791: 99.8907% ( 2) 00:18:26.824 7.847 - 7.903: 99.8968% ( 1) 00:18:26.824 8.125 - 8.181: 99.9029% ( 1) 00:18:26.824 8.237 - 8.292: 99.9090% ( 1) 00:18:26.824 8.403 - 8.459: 99.9150% ( 1) 00:18:26.824 8.849 - 8.904: 99.9211% ( 1) 00:18:26.824 11.075 - 11.130: 99.9272% ( 1) 00:18:26.824 13.412 - 13.468: 99.9332% ( 1) 00:18:26.824 13.635 - 13.690: 99.9393% ( 1) 00:18:26.824 3989.148 - 4017.642: 100.0000% ( 10) 00:18:26.824 00:18:26.824 [2024-12-07 09:52:55.410667] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:26.824 Complete histogram 00:18:26.824 ================== 00:18:26.824 Range in us Cumulative Count 00:18:26.824 1.760 - 1.767: 0.0061% ( 1) 00:18:26.824 1.767 - 1.774: 0.0668% ( 10) 00:18:26.824 1.774 - 1.781: 0.1093% ( 7) 00:18:26.824 1.781 - 1.795: 0.1882% ( 13) 00:18:26.824 1.795 - 1.809: 0.2185% ( 5) 00:18:26.824 1.809 - 1.823: 1.6024% ( 228) 00:18:26.824 1.823 - 1.837: 13.0865% ( 1892) 00:18:26.824 1.837 - 1.850: 20.6373% ( 1244) 00:18:26.824 1.850 - 1.864: 27.4416% ( 1121) 00:18:26.824 1.864 - 1.878: 65.9484% ( 6344) 00:18:26.824 1.878 - 1.892: 91.4598% ( 4203) 00:18:26.824 1.892 - 1.906: 96.3035% ( 798) 00:18:26.824 1.906 - 1.920: 97.6571% ( 223) 00:18:26.824 1.920 - 1.934: 98.0091% ( 58) 00:18:26.824 1.934 - 1.948: 98.5857% ( 95) 00:18:26.824 1.948 - 1.962: 99.0288% ( 73) 00:18:26.824 1.962 - 1.976: 99.2231% ( 32) 00:18:26.824 1.976 - 1.990: 99.2656% ( 7) 00:18:26.824 1.990 - 2.003: 99.3202% ( 9) 00:18:26.824 2.017 - 2.031: 99.3323% ( 2) 00:18:26.824 2.031 - 2.045: 99.3384% ( 1) 00:18:26.824 2.045 - 2.059: 99.3445% ( 1) 00:18:26.824 2.351 - 2.365: 99.3505% ( 1) 00:18:26.824 4.118 - 4.146: 99.3566% ( 1) 00:18:26.824 4.174 - 4.202: 99.3627% ( 1) 00:18:26.824 4.563 - 4.591: 99.3687% ( 1) 00:18:26.824 4.647 - 4.675: 99.3748% ( 1) 00:18:26.824 4.703 - 4.730: 99.3809% ( 1) 00:18:26.824 4.730 - 4.758: 99.3869% ( 1) 00:18:26.824 4.786 - 
4.814: 99.3930% ( 1) 00:18:26.824 4.814 - 4.842: 99.3991% ( 1) 00:18:26.824 4.842 - 4.870: 99.4052% ( 1) 00:18:26.824 4.981 - 5.009: 99.4112% ( 1) 00:18:26.824 5.064 - 5.092: 99.4234% ( 2) 00:18:26.824 5.370 - 5.398: 99.4294% ( 1) 00:18:26.824 5.454 - 5.482: 99.4355% ( 1) 00:18:26.824 5.510 - 5.537: 99.4416% ( 1) 00:18:26.824 5.537 - 5.565: 99.4476% ( 1) 00:18:26.824 5.593 - 5.621: 99.4537% ( 1) 00:18:26.824 5.677 - 5.704: 99.4598% ( 1) 00:18:26.824 5.704 - 5.732: 99.4659% ( 1) 00:18:26.824 5.732 - 5.760: 99.4719% ( 1) 00:18:26.824 6.066 - 6.094: 99.4780% ( 1) 00:18:26.824 6.094 - 6.122: 99.4841% ( 1) 00:18:26.824 6.261 - 6.289: 99.4901% ( 1) 00:18:26.824 6.289 - 6.317: 99.5023% ( 2) 00:18:26.824 6.511 - 6.539: 99.5083% ( 1) 00:18:26.824 13.301 - 13.357: 99.5144% ( 1) 00:18:26.824 2991.861 - 3006.108: 99.5205% ( 1) 00:18:26.824 3989.148 - 4017.642: 100.0000% ( 79) 00:18:26.824 00:18:26.824 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:26.824 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:26.824 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:26.824 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:26.824 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:27.081 [ 00:18:27.081 { 00:18:27.081 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:27.081 "subtype": "Discovery", 00:18:27.081 "listen_addresses": [], 00:18:27.081 "allow_any_host": true, 00:18:27.081 "hosts": [] 00:18:27.081 }, 00:18:27.081 { 00:18:27.081 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:27.081 
"subtype": "NVMe", 00:18:27.081 "listen_addresses": [ 00:18:27.081 { 00:18:27.081 "trtype": "VFIOUSER", 00:18:27.081 "adrfam": "IPv4", 00:18:27.081 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:27.081 "trsvcid": "0" 00:18:27.081 } 00:18:27.081 ], 00:18:27.081 "allow_any_host": true, 00:18:27.081 "hosts": [], 00:18:27.081 "serial_number": "SPDK1", 00:18:27.081 "model_number": "SPDK bdev Controller", 00:18:27.081 "max_namespaces": 32, 00:18:27.081 "min_cntlid": 1, 00:18:27.081 "max_cntlid": 65519, 00:18:27.081 "namespaces": [ 00:18:27.081 { 00:18:27.081 "nsid": 1, 00:18:27.081 "bdev_name": "Malloc1", 00:18:27.081 "name": "Malloc1", 00:18:27.081 "nguid": "991F02B7ED1547798721D32E779A8EE5", 00:18:27.081 "uuid": "991f02b7-ed15-4779-8721-d32e779a8ee5" 00:18:27.081 } 00:18:27.081 ] 00:18:27.081 }, 00:18:27.081 { 00:18:27.081 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:27.081 "subtype": "NVMe", 00:18:27.081 "listen_addresses": [ 00:18:27.081 { 00:18:27.081 "trtype": "VFIOUSER", 00:18:27.081 "adrfam": "IPv4", 00:18:27.081 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:27.081 "trsvcid": "0" 00:18:27.081 } 00:18:27.081 ], 00:18:27.081 "allow_any_host": true, 00:18:27.081 "hosts": [], 00:18:27.081 "serial_number": "SPDK2", 00:18:27.081 "model_number": "SPDK bdev Controller", 00:18:27.081 "max_namespaces": 32, 00:18:27.081 "min_cntlid": 1, 00:18:27.081 "max_cntlid": 65519, 00:18:27.081 "namespaces": [ 00:18:27.081 { 00:18:27.081 "nsid": 1, 00:18:27.081 "bdev_name": "Malloc2", 00:18:27.081 "name": "Malloc2", 00:18:27.081 "nguid": "E3010876024945B29D3E2D6183FF0E06", 00:18:27.081 "uuid": "e3010876-0249-45b2-9d3e-2d6183ff0e06" 00:18:27.081 } 00:18:27.081 ] 00:18:27.081 } 00:18:27.081 ] 00:18:27.081 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:27.081 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:27.081 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1234563 00:18:27.081 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:27.081 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:27.081 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:27.081 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:27.081 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:27.081 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:27.081 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:27.082 [2024-12-07 09:52:55.790403] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:27.339 Malloc3 00:18:27.339 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:27.596 [2024-12-07 09:52:56.064510] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:27.596 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_get_subsystems 00:18:27.596 Asynchronous Event Request test 00:18:27.596 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:27.596 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:27.596 Registering asynchronous event callbacks... 00:18:27.596 Starting namespace attribute notice tests for all controllers... 00:18:27.596 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:27.596 aer_cb - Changed Namespace 00:18:27.596 Cleaning up... 00:18:27.596 [ 00:18:27.596 { 00:18:27.596 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:27.596 "subtype": "Discovery", 00:18:27.597 "listen_addresses": [], 00:18:27.597 "allow_any_host": true, 00:18:27.597 "hosts": [] 00:18:27.597 }, 00:18:27.597 { 00:18:27.597 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:27.597 "subtype": "NVMe", 00:18:27.597 "listen_addresses": [ 00:18:27.597 { 00:18:27.597 "trtype": "VFIOUSER", 00:18:27.597 "adrfam": "IPv4", 00:18:27.597 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:27.597 "trsvcid": "0" 00:18:27.597 } 00:18:27.597 ], 00:18:27.597 "allow_any_host": true, 00:18:27.597 "hosts": [], 00:18:27.597 "serial_number": "SPDK1", 00:18:27.597 "model_number": "SPDK bdev Controller", 00:18:27.597 "max_namespaces": 32, 00:18:27.597 "min_cntlid": 1, 00:18:27.597 "max_cntlid": 65519, 00:18:27.597 "namespaces": [ 00:18:27.597 { 00:18:27.597 "nsid": 1, 00:18:27.597 "bdev_name": "Malloc1", 00:18:27.597 "name": "Malloc1", 00:18:27.597 "nguid": "991F02B7ED1547798721D32E779A8EE5", 00:18:27.597 "uuid": "991f02b7-ed15-4779-8721-d32e779a8ee5" 00:18:27.597 }, 00:18:27.597 { 00:18:27.597 "nsid": 2, 00:18:27.597 "bdev_name": "Malloc3", 00:18:27.597 "name": "Malloc3", 00:18:27.597 "nguid": "60821A61AC78466282B3B228F0F6D478", 00:18:27.597 "uuid": "60821a61-ac78-4662-82b3-b228f0f6d478" 00:18:27.597 } 00:18:27.597 ] 00:18:27.597 }, 00:18:27.597 { 00:18:27.597 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:27.597 "subtype": "NVMe", 
00:18:27.597 "listen_addresses": [ 00:18:27.597 { 00:18:27.597 "trtype": "VFIOUSER", 00:18:27.597 "adrfam": "IPv4", 00:18:27.597 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:27.597 "trsvcid": "0" 00:18:27.597 } 00:18:27.597 ], 00:18:27.597 "allow_any_host": true, 00:18:27.597 "hosts": [], 00:18:27.597 "serial_number": "SPDK2", 00:18:27.597 "model_number": "SPDK bdev Controller", 00:18:27.597 "max_namespaces": 32, 00:18:27.597 "min_cntlid": 1, 00:18:27.597 "max_cntlid": 65519, 00:18:27.597 "namespaces": [ 00:18:27.597 { 00:18:27.597 "nsid": 1, 00:18:27.597 "bdev_name": "Malloc2", 00:18:27.597 "name": "Malloc2", 00:18:27.597 "nguid": "E3010876024945B29D3E2D6183FF0E06", 00:18:27.597 "uuid": "e3010876-0249-45b2-9d3e-2d6183ff0e06" 00:18:27.597 } 00:18:27.597 ] 00:18:27.597 } 00:18:27.597 ] 00:18:27.597 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1234563 00:18:27.597 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:27.597 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:27.597 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:27.597 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:27.856 [2024-12-07 09:52:56.322092] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:18:27.856 [2024-12-07 09:52:56.322140] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234786 ] 00:18:27.856 [2024-12-07 09:52:56.349269] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:27.856 [2024-12-07 09:52:56.359188] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:27.856 [2024-12-07 09:52:56.359215] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7a53f76000 00:18:27.856 [2024-12-07 09:52:56.360193] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:27.856 [2024-12-07 09:52:56.361203] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:27.856 [2024-12-07 09:52:56.362205] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:27.856 [2024-12-07 09:52:56.363210] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:27.856 [2024-12-07 09:52:56.364217] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:27.856 [2024-12-07 09:52:56.365224] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:27.856 [2024-12-07 09:52:56.366230] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:27.856 
[2024-12-07 09:52:56.367233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:27.856 [2024-12-07 09:52:56.368243] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:27.856 [2024-12-07 09:52:56.368253] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7a52c7f000 00:18:27.856 [2024-12-07 09:52:56.369188] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:27.856 [2024-12-07 09:52:56.382327] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:27.856 [2024-12-07 09:52:56.382352] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:18:27.856 [2024-12-07 09:52:56.384401] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:27.856 [2024-12-07 09:52:56.384437] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:27.856 [2024-12-07 09:52:56.384509] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:18:27.856 [2024-12-07 09:52:56.384525] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:18:27.856 [2024-12-07 09:52:56.384530] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:18:27.856 [2024-12-07 09:52:56.385408] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:27.856 [2024-12-07 09:52:56.385417] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:18:27.856 [2024-12-07 09:52:56.385424] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:18:27.856 [2024-12-07 09:52:56.386415] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:27.856 [2024-12-07 09:52:56.386424] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:18:27.856 [2024-12-07 09:52:56.386430] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:18:27.856 [2024-12-07 09:52:56.387421] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:27.856 [2024-12-07 09:52:56.387433] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:27.856 [2024-12-07 09:52:56.388432] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:27.856 [2024-12-07 09:52:56.388440] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:18:27.856 [2024-12-07 09:52:56.388445] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:18:27.856 [2024-12-07 09:52:56.388451] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:27.856 [2024-12-07 09:52:56.388556] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:18:27.856 [2024-12-07 09:52:56.388560] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:27.856 [2024-12-07 09:52:56.388564] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:27.856 [2024-12-07 09:52:56.392954] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:27.856 [2024-12-07 09:52:56.393466] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:27.856 [2024-12-07 09:52:56.394481] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:27.856 [2024-12-07 09:52:56.395486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:27.856 [2024-12-07 09:52:56.395526] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:27.856 [2024-12-07 09:52:56.396499] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:27.856 [2024-12-07 09:52:56.396508] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:27.856 [2024-12-07 09:52:56.396512] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:18:27.856 [2024-12-07 09:52:56.396529] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:18:27.856 [2024-12-07 09:52:56.396537] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:18:27.856 [2024-12-07 09:52:56.396547] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:27.856 [2024-12-07 09:52:56.396552] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:27.856 [2024-12-07 09:52:56.396556] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:27.856 [2024-12-07 09:52:56.396568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:27.856 [2024-12-07 09:52:56.403956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:27.856 [2024-12-07 09:52:56.403969] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:18:27.856 [2024-12-07 09:52:56.403973] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:18:27.856 [2024-12-07 09:52:56.403979] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:18:27.856 [2024-12-07 09:52:56.403984] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:27.856 [2024-12-07 09:52:56.403989] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:18:27.857 [2024-12-07 09:52:56.403994] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:18:27.857 [2024-12-07 09:52:56.403998] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.404006] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.404015] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:27.857 [2024-12-07 09:52:56.411952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:27.857 [2024-12-07 09:52:56.411964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.857 [2024-12-07 09:52:56.411972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.857 [2024-12-07 09:52:56.411980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.857 [2024-12-07 09:52:56.411987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.857 [2024-12-07 09:52:56.411992] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.412001] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait 
for set keep alive timeout (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.412009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:27.857 [2024-12-07 09:52:56.419953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:27.857 [2024-12-07 09:52:56.419961] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:18:27.857 [2024-12-07 09:52:56.419966] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.419972] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.419980] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.419989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:27.857 [2024-12-07 09:52:56.427954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:27.857 [2024-12-07 09:52:56.428008] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.428015] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.428024] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: 
prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:27.857 [2024-12-07 09:52:56.428029] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:27.857 [2024-12-07 09:52:56.428032] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:27.857 [2024-12-07 09:52:56.428038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:27.857 [2024-12-07 09:52:56.435951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:27.857 [2024-12-07 09:52:56.435961] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:18:27.857 [2024-12-07 09:52:56.435973] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.435980] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.435986] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:27.857 [2024-12-07 09:52:56.435990] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:27.857 [2024-12-07 09:52:56.435993] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:27.857 [2024-12-07 09:52:56.435999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:27.857 [2024-12-07 09:52:56.443952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:27.857 [2024-12-07 09:52:56.443965] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.443972] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.443978] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:27.857 [2024-12-07 09:52:56.443982] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:27.857 [2024-12-07 09:52:56.443985] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:27.857 [2024-12-07 09:52:56.443991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:27.857 [2024-12-07 09:52:56.451952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:27.857 [2024-12-07 09:52:56.451961] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.451967] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.451976] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.451982] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.451986] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.451991] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.451996] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:18:27.857 [2024-12-07 09:52:56.452001] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:18:27.857 [2024-12-07 09:52:56.452006] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:18:27.857 [2024-12-07 09:52:56.452023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:27.857 [2024-12-07 09:52:56.459951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:27.857 [2024-12-07 09:52:56.459964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:27.857 [2024-12-07 09:52:56.467952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:27.857 [2024-12-07 09:52:56.467965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:27.857 [2024-12-07 09:52:56.475953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:27.857 [2024-12-07 09:52:56.475965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF 
QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:27.857 [2024-12-07 09:52:56.483951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:27.857 [2024-12-07 09:52:56.483966] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:27.857 [2024-12-07 09:52:56.483970] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:27.857 [2024-12-07 09:52:56.483974] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:27.857 [2024-12-07 09:52:56.483977] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:27.857 [2024-12-07 09:52:56.483980] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:27.857 [2024-12-07 09:52:56.483985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:27.857 [2024-12-07 09:52:56.483992] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:27.857 [2024-12-07 09:52:56.483996] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:27.857 [2024-12-07 09:52:56.483999] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:27.857 [2024-12-07 09:52:56.484004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:27.857 [2024-12-07 09:52:56.484010] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:27.857 [2024-12-07 09:52:56.484014] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:27.857 
[2024-12-07 09:52:56.484017] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:27.857 [2024-12-07 09:52:56.484022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:27.857 [2024-12-07 09:52:56.484029] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:27.857 [2024-12-07 09:52:56.484033] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:27.857 [2024-12-07 09:52:56.484036] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:27.857 [2024-12-07 09:52:56.484041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:27.857 [2024-12-07 09:52:56.491952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:27.857 [2024-12-07 09:52:56.491966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:27.857 [2024-12-07 09:52:56.491975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:27.857 [2024-12-07 09:52:56.491981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:27.857 ===================================================== 00:18:27.858 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:27.858 ===================================================== 00:18:27.858 Controller Capabilities/Features 00:18:27.858 ================================ 00:18:27.858 Vendor ID: 4e58 00:18:27.858 Subsystem Vendor ID: 4e58 
00:18:27.858 Serial Number: SPDK2 00:18:27.858 Model Number: SPDK bdev Controller 00:18:27.858 Firmware Version: 24.09.1 00:18:27.858 Recommended Arb Burst: 6 00:18:27.858 IEEE OUI Identifier: 8d 6b 50 00:18:27.858 Multi-path I/O 00:18:27.858 May have multiple subsystem ports: Yes 00:18:27.858 May have multiple controllers: Yes 00:18:27.858 Associated with SR-IOV VF: No 00:18:27.858 Max Data Transfer Size: 131072 00:18:27.858 Max Number of Namespaces: 32 00:18:27.858 Max Number of I/O Queues: 127 00:18:27.858 NVMe Specification Version (VS): 1.3 00:18:27.858 NVMe Specification Version (Identify): 1.3 00:18:27.858 Maximum Queue Entries: 256 00:18:27.858 Contiguous Queues Required: Yes 00:18:27.858 Arbitration Mechanisms Supported 00:18:27.858 Weighted Round Robin: Not Supported 00:18:27.858 Vendor Specific: Not Supported 00:18:27.858 Reset Timeout: 15000 ms 00:18:27.858 Doorbell Stride: 4 bytes 00:18:27.858 NVM Subsystem Reset: Not Supported 00:18:27.858 Command Sets Supported 00:18:27.858 NVM Command Set: Supported 00:18:27.858 Boot Partition: Not Supported 00:18:27.858 Memory Page Size Minimum: 4096 bytes 00:18:27.858 Memory Page Size Maximum: 4096 bytes 00:18:27.858 Persistent Memory Region: Not Supported 00:18:27.858 Optional Asynchronous Events Supported 00:18:27.858 Namespace Attribute Notices: Supported 00:18:27.858 Firmware Activation Notices: Not Supported 00:18:27.858 ANA Change Notices: Not Supported 00:18:27.858 PLE Aggregate Log Change Notices: Not Supported 00:18:27.858 LBA Status Info Alert Notices: Not Supported 00:18:27.858 EGE Aggregate Log Change Notices: Not Supported 00:18:27.858 Normal NVM Subsystem Shutdown event: Not Supported 00:18:27.858 Zone Descriptor Change Notices: Not Supported 00:18:27.858 Discovery Log Change Notices: Not Supported 00:18:27.858 Controller Attributes 00:18:27.858 128-bit Host Identifier: Supported 00:18:27.858 Non-Operational Permissive Mode: Not Supported 00:18:27.858 NVM Sets: Not Supported 00:18:27.858 Read 
Recovery Levels: Not Supported 00:18:27.858 Endurance Groups: Not Supported 00:18:27.858 Predictable Latency Mode: Not Supported 00:18:27.858 Traffic Based Keep ALive: Not Supported 00:18:27.858 Namespace Granularity: Not Supported 00:18:27.858 SQ Associations: Not Supported 00:18:27.858 UUID List: Not Supported 00:18:27.858 Multi-Domain Subsystem: Not Supported 00:18:27.858 Fixed Capacity Management: Not Supported 00:18:27.858 Variable Capacity Management: Not Supported 00:18:27.858 Delete Endurance Group: Not Supported 00:18:27.858 Delete NVM Set: Not Supported 00:18:27.858 Extended LBA Formats Supported: Not Supported 00:18:27.858 Flexible Data Placement Supported: Not Supported 00:18:27.858 00:18:27.858 Controller Memory Buffer Support 00:18:27.858 ================================ 00:18:27.858 Supported: No 00:18:27.858 00:18:27.858 Persistent Memory Region Support 00:18:27.858 ================================ 00:18:27.858 Supported: No 00:18:27.858 00:18:27.858 Admin Command Set Attributes 00:18:27.858 ============================ 00:18:27.858 Security Send/Receive: Not Supported 00:18:27.858 Format NVM: Not Supported 00:18:27.858 Firmware Activate/Download: Not Supported 00:18:27.858 Namespace Management: Not Supported 00:18:27.858 Device Self-Test: Not Supported 00:18:27.858 Directives: Not Supported 00:18:27.858 NVMe-MI: Not Supported 00:18:27.858 Virtualization Management: Not Supported 00:18:27.858 Doorbell Buffer Config: Not Supported 00:18:27.858 Get LBA Status Capability: Not Supported 00:18:27.858 Command & Feature Lockdown Capability: Not Supported 00:18:27.858 Abort Command Limit: 4 00:18:27.858 Async Event Request Limit: 4 00:18:27.858 Number of Firmware Slots: N/A 00:18:27.858 Firmware Slot 1 Read-Only: N/A 00:18:27.858 Firmware Activation Without Reset: N/A 00:18:27.858 Multiple Update Detection Support: N/A 00:18:27.858 Firmware Update Granularity: No Information Provided 00:18:27.858 Per-Namespace SMART Log: No 00:18:27.858 Asymmetric Namespace 
Access Log Page: Not Supported 00:18:27.858 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:27.858 Command Effects Log Page: Supported 00:18:27.858 Get Log Page Extended Data: Supported 00:18:27.858 Telemetry Log Pages: Not Supported 00:18:27.858 Persistent Event Log Pages: Not Supported 00:18:27.858 Supported Log Pages Log Page: May Support 00:18:27.858 Commands Supported & Effects Log Page: Not Supported 00:18:27.858 Feature Identifiers & Effects Log Page:May Support 00:18:27.858 NVMe-MI Commands & Effects Log Page: May Support 00:18:27.858 Data Area 4 for Telemetry Log: Not Supported 00:18:27.858 Error Log Page Entries Supported: 128 00:18:27.858 Keep Alive: Supported 00:18:27.858 Keep Alive Granularity: 10000 ms 00:18:27.858 00:18:27.858 NVM Command Set Attributes 00:18:27.858 ========================== 00:18:27.858 Submission Queue Entry Size 00:18:27.858 Max: 64 00:18:27.858 Min: 64 00:18:27.858 Completion Queue Entry Size 00:18:27.858 Max: 16 00:18:27.858 Min: 16 00:18:27.858 Number of Namespaces: 32 00:18:27.858 Compare Command: Supported 00:18:27.858 Write Uncorrectable Command: Not Supported 00:18:27.858 Dataset Management Command: Supported 00:18:27.858 Write Zeroes Command: Supported 00:18:27.858 Set Features Save Field: Not Supported 00:18:27.858 Reservations: Not Supported 00:18:27.858 Timestamp: Not Supported 00:18:27.858 Copy: Supported 00:18:27.858 Volatile Write Cache: Present 00:18:27.858 Atomic Write Unit (Normal): 1 00:18:27.858 Atomic Write Unit (PFail): 1 00:18:27.858 Atomic Compare & Write Unit: 1 00:18:27.858 Fused Compare & Write: Supported 00:18:27.858 Scatter-Gather List 00:18:27.858 SGL Command Set: Supported (Dword aligned) 00:18:27.858 SGL Keyed: Not Supported 00:18:27.858 SGL Bit Bucket Descriptor: Not Supported 00:18:27.858 SGL Metadata Pointer: Not Supported 00:18:27.858 Oversized SGL: Not Supported 00:18:27.858 SGL Metadata Address: Not Supported 00:18:27.858 SGL Offset: Not Supported 00:18:27.858 Transport SGL Data Block: Not 
Supported 00:18:27.858 Replay Protected Memory Block: Not Supported 00:18:27.858 00:18:27.858 Firmware Slot Information 00:18:27.858 ========================= 00:18:27.858 Active slot: 1 00:18:27.858 Slot 1 Firmware Revision: 24.09.1 00:18:27.858 00:18:27.858 00:18:27.858 Commands Supported and Effects 00:18:27.858 ============================== 00:18:27.858 Admin Commands 00:18:27.858 -------------- 00:18:27.858 Get Log Page (02h): Supported 00:18:27.858 Identify (06h): Supported 00:18:27.858 Abort (08h): Supported 00:18:27.858 Set Features (09h): Supported 00:18:27.858 Get Features (0Ah): Supported 00:18:27.858 Asynchronous Event Request (0Ch): Supported 00:18:27.858 Keep Alive (18h): Supported 00:18:27.858 I/O Commands 00:18:27.858 ------------ 00:18:27.858 Flush (00h): Supported LBA-Change 00:18:27.858 Write (01h): Supported LBA-Change 00:18:27.858 Read (02h): Supported 00:18:27.858 Compare (05h): Supported 00:18:27.858 Write Zeroes (08h): Supported LBA-Change 00:18:27.858 Dataset Management (09h): Supported LBA-Change 00:18:27.858 Copy (19h): Supported LBA-Change 00:18:27.858 00:18:27.858 Error Log 00:18:27.858 ========= 00:18:27.858 00:18:27.858 Arbitration 00:18:27.858 =========== 00:18:27.858 Arbitration Burst: 1 00:18:27.858 00:18:27.858 Power Management 00:18:27.858 ================ 00:18:27.858 Number of Power States: 1 00:18:27.858 Current Power State: Power State #0 00:18:27.858 Power State #0: 00:18:27.858 Max Power: 0.00 W 00:18:27.858 Non-Operational State: Operational 00:18:27.858 Entry Latency: Not Reported 00:18:27.858 Exit Latency: Not Reported 00:18:27.858 Relative Read Throughput: 0 00:18:27.858 Relative Read Latency: 0 00:18:27.858 Relative Write Throughput: 0 00:18:27.858 Relative Write Latency: 0 00:18:27.858 Idle Power: Not Reported 00:18:27.858 Active Power: Not Reported 00:18:27.858 Non-Operational Permissive Mode: Not Supported 00:18:27.858 00:18:27.858 Health Information 00:18:27.858 ================== 00:18:27.858 Critical Warnings: 
00:18:27.858 Available Spare Space: OK 00:18:27.858 Temperature: OK 00:18:27.858 Device Reliability: OK 00:18:27.858 Read Only: No 00:18:27.858 Volatile Memory Backup: OK 00:18:27.858 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:27.858 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:27.858 Available Spare: 0% 00:18:27.859 Availabl[2024-12-07 09:52:56.492068] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:27.859 [2024-12-07 09:52:56.499952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:27.859 [2024-12-07 09:52:56.499983] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:18:27.859 [2024-12-07 09:52:56.499992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.859 [2024-12-07 09:52:56.499998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.859 [2024-12-07 09:52:56.500003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.859 [2024-12-07 09:52:56.500009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.859 [2024-12-07 09:52:56.500050] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:27.859 [2024-12-07 09:52:56.500060] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:27.859 [2024-12-07 09:52:56.501059] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling 
controller 00:18:27.859 [2024-12-07 09:52:56.501103] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:18:27.859 [2024-12-07 09:52:56.501110] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:18:27.859 [2024-12-07 09:52:56.502074] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:27.859 [2024-12-07 09:52:56.502085] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:18:27.859 [2024-12-07 09:52:56.502137] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:27.859 [2024-12-07 09:52:56.503106] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:27.859 e Spare Threshold: 0% 00:18:27.859 Life Percentage Used: 0% 00:18:27.859 Data Units Read: 0 00:18:27.859 Data Units Written: 0 00:18:27.859 Host Read Commands: 0 00:18:27.859 Host Write Commands: 0 00:18:27.859 Controller Busy Time: 0 minutes 00:18:27.859 Power Cycles: 0 00:18:27.859 Power On Hours: 0 hours 00:18:27.859 Unsafe Shutdowns: 0 00:18:27.859 Unrecoverable Media Errors: 0 00:18:27.859 Lifetime Error Log Entries: 0 00:18:27.859 Warning Temperature Time: 0 minutes 00:18:27.859 Critical Temperature Time: 0 minutes 00:18:27.859 00:18:27.859 Number of Queues 00:18:27.859 ================ 00:18:27.859 Number of I/O Submission Queues: 127 00:18:27.859 Number of I/O Completion Queues: 127 00:18:27.859 00:18:27.859 Active Namespaces 00:18:27.859 ================= 00:18:27.859 Namespace ID:1 00:18:27.859 Error Recovery Timeout: Unlimited 00:18:27.859 Command Set Identifier: NVM (00h) 00:18:27.859 Deallocate: Supported 00:18:27.859 Deallocated/Unwritten Error: 
Not Supported 00:18:27.859 Deallocated Read Value: Unknown 00:18:27.859 Deallocate in Write Zeroes: Not Supported 00:18:27.859 Deallocated Guard Field: 0xFFFF 00:18:27.859 Flush: Supported 00:18:27.859 Reservation: Supported 00:18:27.859 Namespace Sharing Capabilities: Multiple Controllers 00:18:27.859 Size (in LBAs): 131072 (0GiB) 00:18:27.859 Capacity (in LBAs): 131072 (0GiB) 00:18:27.859 Utilization (in LBAs): 131072 (0GiB) 00:18:27.859 NGUID: E3010876024945B29D3E2D6183FF0E06 00:18:27.859 UUID: e3010876-0249-45b2-9d3e-2d6183ff0e06 00:18:27.859 Thin Provisioning: Not Supported 00:18:27.859 Per-NS Atomic Units: Yes 00:18:27.859 Atomic Boundary Size (Normal): 0 00:18:27.859 Atomic Boundary Size (PFail): 0 00:18:27.859 Atomic Boundary Offset: 0 00:18:27.859 Maximum Single Source Range Length: 65535 00:18:27.859 Maximum Copy Length: 65535 00:18:27.859 Maximum Source Range Count: 1 00:18:27.859 NGUID/EUI64 Never Reused: No 00:18:27.859 Namespace Write Protected: No 00:18:27.859 Number of LBA Formats: 1 00:18:27.859 Current LBA Format: LBA Format #00 00:18:27.859 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:27.859 00:18:27.859 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:28.116 [2024-12-07 09:52:56.717257] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:33.382 Initializing NVMe Controllers 00:18:33.382 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:33.382 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:33.382 Initialization complete. Launching workers. 
00:18:33.382 ======================================================== 00:18:33.382 Latency(us) 00:18:33.382 Device Information : IOPS MiB/s Average min max 00:18:33.382 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39958.40 156.09 3202.93 964.25 6616.34 00:18:33.382 ======================================================== 00:18:33.382 Total : 39958.40 156.09 3202.93 964.25 6616.34 00:18:33.382 00:18:33.382 [2024-12-07 09:53:01.822208] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:33.382 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:33.382 [2024-12-07 09:53:02.040826] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:38.638 Initializing NVMe Controllers 00:18:38.638 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:38.638 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:38.638 Initialization complete. Launching workers. 
00:18:38.638 ======================================================== 00:18:38.638 Latency(us) 00:18:38.638 Device Information : IOPS MiB/s Average min max 00:18:38.638 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39980.78 156.17 3201.65 999.32 9153.34 00:18:38.638 ======================================================== 00:18:38.638 Total : 39980.78 156.17 3201.65 999.32 9153.34 00:18:38.638 00:18:38.638 [2024-12-07 09:53:07.062237] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:38.638 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:38.638 [2024-12-07 09:53:07.255441] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:44.172 [2024-12-07 09:53:12.395045] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:44.172 Initializing NVMe Controllers 00:18:44.172 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:44.172 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:44.172 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:44.172 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:44.172 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:44.172 Initialization complete. Launching workers. 
00:18:44.172 Starting thread on core 2 00:18:44.172 Starting thread on core 3 00:18:44.172 Starting thread on core 1 00:18:44.172 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:44.172 [2024-12-07 09:53:12.673566] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:47.450 [2024-12-07 09:53:15.749206] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:47.450 Initializing NVMe Controllers 00:18:47.450 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:47.450 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:47.450 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:47.450 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:47.450 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:47.450 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:47.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:47.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:47.450 Initialization complete. Launching workers. 
00:18:47.450 Starting thread on core 1 with urgent priority queue
00:18:47.450 Starting thread on core 2 with urgent priority queue
00:18:47.450 Starting thread on core 3 with urgent priority queue
00:18:47.450 Starting thread on core 0 with urgent priority queue
00:18:47.450 SPDK bdev Controller (SPDK2 ) core 0: 2438.33 IO/s 41.01 secs/100000 ios
00:18:47.450 SPDK bdev Controller (SPDK2 ) core 1: 1497.33 IO/s 66.79 secs/100000 ios
00:18:47.450 SPDK bdev Controller (SPDK2 ) core 2: 2327.00 IO/s 42.97 secs/100000 ios
00:18:47.450 SPDK bdev Controller (SPDK2 ) core 3: 1619.33 IO/s 61.75 secs/100000 ios
00:18:47.450 ========================================================
00:18:47.450
00:18:47.450 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
00:18:47.450 [2024-12-07 09:53:16.024435] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:18:47.450 Initializing NVMe Controllers
00:18:47.450 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:18:47.450 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:18:47.450 Namespace ID: 1 size: 0GB
00:18:47.450 Initialization complete.
00:18:47.450 INFO: using host memory buffer for IO
00:18:47.450 Hello world!
00:18:47.450 [2024-12-07 09:53:16.033503] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:47.450 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:47.706 [2024-12-07 09:53:16.302811] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:49.076 Initializing NVMe Controllers 00:18:49.076 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:49.076 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:49.076 Initialization complete. Launching workers. 00:18:49.076 submit (in ns) avg, min, max = 8581.2, 3280.0, 3999667.8 00:18:49.076 complete (in ns) avg, min, max = 19945.1, 1845.2, 3998473.0 00:18:49.076 00:18:49.076 Submit histogram 00:18:49.076 ================ 00:18:49.076 Range in us Cumulative Count 00:18:49.076 3.270 - 3.283: 0.0061% ( 1) 00:18:49.076 3.283 - 3.297: 0.2319% ( 37) 00:18:49.076 3.297 - 3.311: 2.1358% ( 312) 00:18:49.076 3.311 - 3.325: 6.2550% ( 675) 00:18:49.076 3.325 - 3.339: 11.8020% ( 909) 00:18:49.076 3.339 - 3.353: 18.0448% ( 1023) 00:18:49.076 3.353 - 3.367: 24.5072% ( 1059) 00:18:49.076 3.367 - 3.381: 29.8102% ( 869) 00:18:49.076 3.381 - 3.395: 34.8081% ( 819) 00:18:49.076 3.395 - 3.409: 40.4040% ( 917) 00:18:49.076 3.409 - 3.423: 45.1761% ( 782) 00:18:49.076 3.423 - 3.437: 48.9229% ( 614) 00:18:49.076 3.437 - 3.450: 53.0054% ( 669) 00:18:49.076 3.450 - 3.464: 59.6143% ( 1083) 00:18:49.076 3.464 - 3.478: 66.1805% ( 1076) 00:18:49.076 3.478 - 3.492: 70.8244% ( 761) 00:18:49.076 3.492 - 3.506: 76.1823% ( 878) 00:18:49.076 3.506 - 3.520: 80.6066% ( 725) 00:18:49.076 3.520 - 3.534: 83.9629% ( 550) 00:18:49.076 3.534 - 3.548: 86.2635% ( 377) 00:18:49.076 3.548 - 3.562: 87.5206% ( 
206) 00:18:49.076 3.562 - 3.590: 88.3688% ( 139) 00:18:49.076 3.590 - 3.617: 89.2598% ( 146) 00:18:49.076 3.617 - 3.645: 90.7976% ( 252) 00:18:49.076 3.645 - 3.673: 92.3903% ( 261) 00:18:49.077 3.673 - 3.701: 93.9037% ( 248) 00:18:49.077 3.701 - 3.729: 95.6551% ( 287) 00:18:49.077 3.729 - 3.757: 97.0892% ( 235) 00:18:49.077 3.757 - 3.784: 98.2669% ( 193) 00:18:49.077 3.784 - 3.812: 98.8527% ( 96) 00:18:49.077 3.812 - 3.840: 99.2311% ( 62) 00:18:49.077 3.840 - 3.868: 99.4508% ( 36) 00:18:49.077 3.868 - 3.896: 99.5850% ( 22) 00:18:49.077 3.896 - 3.923: 99.6094% ( 4) 00:18:49.077 3.923 - 3.951: 99.6278% ( 3) 00:18:49.077 4.035 - 4.063: 99.6339% ( 1) 00:18:49.077 5.148 - 5.176: 99.6522% ( 3) 00:18:49.077 5.176 - 5.203: 99.6583% ( 1) 00:18:49.077 5.398 - 5.426: 99.6644% ( 1) 00:18:49.077 5.426 - 5.454: 99.6705% ( 1) 00:18:49.077 5.454 - 5.482: 99.6766% ( 1) 00:18:49.077 5.537 - 5.565: 99.6827% ( 1) 00:18:49.077 5.649 - 5.677: 99.6888% ( 1) 00:18:49.077 5.704 - 5.732: 99.7010% ( 2) 00:18:49.077 5.732 - 5.760: 99.7071% ( 1) 00:18:49.077 5.788 - 5.816: 99.7132% ( 1) 00:18:49.077 5.816 - 5.843: 99.7193% ( 1) 00:18:49.077 5.955 - 5.983: 99.7315% ( 2) 00:18:49.077 6.177 - 6.205: 99.7376% ( 1) 00:18:49.077 6.317 - 6.344: 99.7437% ( 1) 00:18:49.077 6.372 - 6.400: 99.7498% ( 1) 00:18:49.077 6.623 - 6.650: 99.7620% ( 2) 00:18:49.077 6.650 - 6.678: 99.7681% ( 1) 00:18:49.077 6.706 - 6.734: 99.7742% ( 1) 00:18:49.077 6.762 - 6.790: 99.7803% ( 1) 00:18:49.077 6.873 - 6.901: 99.7864% ( 1) 00:18:49.077 7.012 - 7.040: 99.7925% ( 1) 00:18:49.077 7.068 - 7.096: 99.7986% ( 1) 00:18:49.077 7.123 - 7.179: 99.8108% ( 2) 00:18:49.077 7.402 - 7.457: 99.8169% ( 1) 00:18:49.077 7.457 - 7.513: 99.8230% ( 1) 00:18:49.077 7.513 - 7.569: 99.8291% ( 1) 00:18:49.077 7.791 - 7.847: 99.8413% ( 2) 00:18:49.077 7.847 - 7.903: 99.8474% ( 1) 00:18:49.077 7.903 - 7.958: 99.8535% ( 1) 00:18:49.077 8.181 - 8.237: 99.8596% ( 1) 00:18:49.077 8.237 - 8.292: 99.8657% ( 1) 00:18:49.077 8.403 - 8.459: 99.8718% ( 1) 
00:18:49.077 3989.148 - 4017.642: 100.0000% ( 21) 00:18:49.077 00:18:49.077 Complete histogram 00:18:49.077 ================== 00:18:49.077 Range in us Cumulative Count 00:18:49.077 1.837 - 1.850: 0.0366% ( 6) 00:18:49.077 1.850 - 1.864: 0.4638% ( 70) 00:18:49.077 1.864 - 1.878: 1.4951% ( 169) 00:18:49.077 1.878 - 1.892: 7.7134% ( 1019) 00:18:49.077 1.892 - 1.906: 44.3522% ( 6004) 00:18:49.077 1.906 - 1.920: 84.0605% ( 6507) 00:18:49.077 1.920 - 1.934: 96.1311% ( 1978) 00:18:49.077 1.934 - 1.948: 98.7673% ( 432) 00:18:49.077 1.948 - [2024-12-07 09:53:17.396001] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:49.077 1.962: 99.2494% ( 79) 00:18:49.077 1.962 - 1.976: 99.3470% ( 16) 00:18:49.077 1.976 - 1.990: 99.3898% ( 7) 00:18:49.077 1.990 - 2.003: 99.3959% ( 1) 00:18:49.077 2.003 - 2.017: 99.4142% ( 3) 00:18:49.077 2.017 - 2.031: 99.4203% ( 1) 00:18:49.077 2.059 - 2.073: 99.4264% ( 1) 00:18:49.077 2.087 - 2.101: 99.4325% ( 1) 00:18:49.077 3.492 - 3.506: 99.4386% ( 1) 00:18:49.077 3.590 - 3.617: 99.4447% ( 1) 00:18:49.077 4.035 - 4.063: 99.4508% ( 1) 00:18:49.077 4.063 - 4.090: 99.4569% ( 1) 00:18:49.077 4.452 - 4.480: 99.4630% ( 1) 00:18:49.077 4.591 - 4.619: 99.4691% ( 1) 00:18:49.077 4.786 - 4.814: 99.4752% ( 1) 00:18:49.077 4.842 - 4.870: 99.4813% ( 1) 00:18:49.077 4.953 - 4.981: 99.4874% ( 1) 00:18:49.077 5.426 - 5.454: 99.4935% ( 1) 00:18:49.077 5.732 - 5.760: 99.4996% ( 1) 00:18:49.077 6.177 - 6.205: 99.5057% ( 1) 00:18:49.077 6.317 - 6.344: 99.5118% ( 1) 00:18:49.077 6.623 - 6.650: 99.5179% ( 1) 00:18:49.077 7.068 - 7.096: 99.5240% ( 1) 00:18:49.077 7.123 - 7.179: 99.5301% ( 1) 00:18:49.077 12.911 - 12.967: 99.5362% ( 1) 00:18:49.077 42.741 - 42.963: 99.5423% ( 1) 00:18:49.077 48.751 - 48.974: 99.5484% ( 1) 00:18:49.077 3846.678 - 3875.172: 99.5545% ( 1) 00:18:49.077 3989.148 - 4017.642: 100.0000% ( 73) 00:18:49.077 00:18:49.077 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2
00:18:49.077 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2
00:18:49.077 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2
00:18:49.077 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4
00:18:49.077 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:18:49.077 [
00:18:49.077 {
00:18:49.077 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:18:49.077 "subtype": "Discovery",
00:18:49.077 "listen_addresses": [],
00:18:49.077 "allow_any_host": true,
00:18:49.077 "hosts": []
00:18:49.077 },
00:18:49.077 {
00:18:49.077 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:18:49.077 "subtype": "NVMe",
00:18:49.077 "listen_addresses": [
00:18:49.077 {
00:18:49.077 "trtype": "VFIOUSER",
00:18:49.077 "adrfam": "IPv4",
00:18:49.077 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:18:49.077 "trsvcid": "0"
00:18:49.077 }
00:18:49.077 ],
00:18:49.077 "allow_any_host": true,
00:18:49.077 "hosts": [],
00:18:49.077 "serial_number": "SPDK1",
00:18:49.077 "model_number": "SPDK bdev Controller",
00:18:49.077 "max_namespaces": 32,
00:18:49.077 "min_cntlid": 1,
00:18:49.077 "max_cntlid": 65519,
00:18:49.077 "namespaces": [
00:18:49.077 {
00:18:49.077 "nsid": 1,
00:18:49.077 "bdev_name": "Malloc1",
00:18:49.077 "name": "Malloc1",
00:18:49.077 "nguid": "991F02B7ED1547798721D32E779A8EE5",
00:18:49.077 "uuid": "991f02b7-ed15-4779-8721-d32e779a8ee5"
00:18:49.077 },
00:18:49.077 {
00:18:49.077 "nsid": 2,
00:18:49.077 "bdev_name": "Malloc3",
00:18:49.077 "name": "Malloc3",
00:18:49.077 "nguid": "60821A61AC78466282B3B228F0F6D478",
00:18:49.077 "uuid": "60821a61-ac78-4662-82b3-b228f0f6d478"
00:18:49.077 }
00:18:49.077 ]
00:18:49.077 },
00:18:49.077 {
00:18:49.077 "nqn": "nqn.2019-07.io.spdk:cnode2",
00:18:49.077 "subtype": "NVMe",
00:18:49.077 "listen_addresses": [
00:18:49.077 {
00:18:49.077 "trtype": "VFIOUSER",
00:18:49.077 "adrfam": "IPv4",
00:18:49.077 "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:18:49.077 "trsvcid": "0"
00:18:49.077 }
00:18:49.077 ],
00:18:49.077 "allow_any_host": true,
00:18:49.077 "hosts": [],
00:18:49.077 "serial_number": "SPDK2",
00:18:49.077 "model_number": "SPDK bdev Controller",
00:18:49.077 "max_namespaces": 32,
00:18:49.077 "min_cntlid": 1,
00:18:49.077 "max_cntlid": 65519,
00:18:49.077 "namespaces": [
00:18:49.077 {
00:18:49.077 "nsid": 1,
00:18:49.077 "bdev_name": "Malloc2",
00:18:49.077 "name": "Malloc2",
00:18:49.077 "nguid": "E3010876024945B29D3E2D6183FF0E06",
00:18:49.077 "uuid": "e3010876-0249-45b2-9d3e-2d6183ff0e06"
00:18:49.077 }
00:18:49.077 ]
00:18:49.077 }
00:18:49.077 ]
00:18:49.077 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:18:49.077 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file
00:18:49.077 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1238262
00:18:49.077 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:18:49.077 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0
00:18:49.077 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:18:49.077 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:18:49.077 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0
00:18:49.077 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
00:18:49.077 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
00:18:49.077 [2024-12-07 09:53:17.773932] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:18:49.336 Malloc4
00:18:49.336 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
00:18:49.336 [2024-12-07 09:53:18.033845] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:18:49.336 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:18:49.594 Asynchronous Event Request test
00:18:49.594 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:18:49.594 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:18:49.594 Registering asynchronous event callbacks...
00:18:49.594 Starting namespace attribute notice tests for all controllers...
00:18:49.594 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:18:49.594 aer_cb - Changed Namespace
00:18:49.594 Cleaning up...
00:18:49.594 [
00:18:49.594 {
00:18:49.594 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:18:49.594 "subtype": "Discovery",
00:18:49.594 "listen_addresses": [],
00:18:49.594 "allow_any_host": true,
00:18:49.594 "hosts": []
00:18:49.594 },
00:18:49.594 {
00:18:49.594 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:18:49.594 "subtype": "NVMe",
00:18:49.594 "listen_addresses": [
00:18:49.594 {
00:18:49.594 "trtype": "VFIOUSER",
00:18:49.594 "adrfam": "IPv4",
00:18:49.594 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:18:49.594 "trsvcid": "0"
00:18:49.594 }
00:18:49.594 ],
00:18:49.594 "allow_any_host": true,
00:18:49.594 "hosts": [],
00:18:49.594 "serial_number": "SPDK1",
00:18:49.594 "model_number": "SPDK bdev Controller",
00:18:49.594 "max_namespaces": 32,
00:18:49.594 "min_cntlid": 1,
00:18:49.594 "max_cntlid": 65519,
00:18:49.594 "namespaces": [
00:18:49.594 {
00:18:49.594 "nsid": 1,
00:18:49.594 "bdev_name": "Malloc1",
00:18:49.594 "name": "Malloc1",
00:18:49.594 "nguid": "991F02B7ED1547798721D32E779A8EE5",
00:18:49.594 "uuid": "991f02b7-ed15-4779-8721-d32e779a8ee5"
00:18:49.594 },
00:18:49.594 {
00:18:49.594 "nsid": 2,
00:18:49.594 "bdev_name": "Malloc3",
00:18:49.595 "name": "Malloc3",
00:18:49.595 "nguid": "60821A61AC78466282B3B228F0F6D478",
00:18:49.595 "uuid": "60821a61-ac78-4662-82b3-b228f0f6d478"
00:18:49.595 }
00:18:49.595 ]
00:18:49.595 },
00:18:49.595 {
00:18:49.595 "nqn": "nqn.2019-07.io.spdk:cnode2",
00:18:49.595 "subtype": "NVMe",
00:18:49.595 "listen_addresses": [
00:18:49.595 {
00:18:49.595 "trtype": "VFIOUSER",
00:18:49.595 "adrfam": "IPv4",
00:18:49.595 "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:18:49.595 "trsvcid": "0"
00:18:49.595 }
00:18:49.595 ],
00:18:49.595 "allow_any_host": true,
00:18:49.595 "hosts": [],
00:18:49.595 "serial_number": "SPDK2",
00:18:49.595 "model_number": "SPDK bdev Controller",
00:18:49.595 "max_namespaces": 32,
00:18:49.595 "min_cntlid": 1,
00:18:49.595 "max_cntlid": 65519,
00:18:49.595 "namespaces": [
00:18:49.595 {
00:18:49.595 "nsid": 1,
00:18:49.595 "bdev_name": "Malloc2",
00:18:49.595 "name": "Malloc2",
00:18:49.595 "nguid": "E3010876024945B29D3E2D6183FF0E06",
00:18:49.595 "uuid": "e3010876-0249-45b2-9d3e-2d6183ff0e06"
00:18:49.595 },
00:18:49.595 {
00:18:49.595 "nsid": 2,
00:18:49.595 "bdev_name": "Malloc4",
00:18:49.595 "name": "Malloc4",
00:18:49.595 "nguid": "977FB01C5F77451D831CC8B1B84D5FCC",
00:18:49.595 "uuid": "977fb01c-5f77-451d-831c-c8b1b84d5fcc"
00:18:49.595 }
00:18:49.595 ]
00:18:49.595 }
00:18:49.595 ]
00:18:49.595 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1238262
00:18:49.595 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user
00:18:49.595 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1230636
00:18:49.595 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1230636 ']'
00:18:49.595 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1230636
00:18:49.595 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname
00:18:49.595 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:49.595 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1230636
00:18:49.595 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:49.595 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:49.595 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1230636'
00:18:49.595 killing process with pid 1230636
00:18:49.595 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user --
common/autotest_common.sh@969 -- # kill 1230636 00:18:49.595 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1230636 00:18:49.853 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:49.853 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:49.853 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:49.853 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:49.853 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:49.853 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1238413 00:18:49.853 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1238413' 00:18:49.853 Process pid: 1238413 00:18:49.853 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:49.853 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:49.853 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1238413 00:18:50.118 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1238413 ']' 00:18:50.118 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.118 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:50.118 
09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:50.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:50.118 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:50.118 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:18:50.118 [2024-12-07 09:53:18.620484] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:18:50.118 [2024-12-07 09:53:18.621405] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:18:50.118 [2024-12-07 09:53:18.621445] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:50.118 [2024-12-07 09:53:18.680557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:18:50.118 [2024-12-07 09:53:18.722920] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:50.118 [2024-12-07 09:53:18.722966] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:50.118 [2024-12-07 09:53:18.722973] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:50.118 [2024-12-07 09:53:18.722979] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:50.118 [2024-12-07 09:53:18.722985] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:50.118 [2024-12-07 09:53:18.723034] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:18:50.118 [2024-12-07 09:53:18.723062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:18:50.118 [2024-12-07 09:53:18.723147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:18:50.118 [2024-12-07 09:53:18.723149] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:50.118 [2024-12-07 09:53:18.794604] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:18:50.118 [2024-12-07 09:53:18.794683] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:18:50.118 [2024-12-07 09:53:18.794918] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:18:50.118 [2024-12-07 09:53:18.795221] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:18:50.118 [2024-12-07 09:53:18.795468] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:18:50.118 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:50.118 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:50.118 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:51.497 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:51.497 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:51.497 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:51.497 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:51.497 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:51.497 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:51.497 Malloc1 00:18:51.756 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:51.756 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:52.014 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:18:52.270 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:52.270 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:52.270 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:52.527 Malloc2 00:18:52.527 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:52.527 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:52.783 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:53.041 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:53.041 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1238413 00:18:53.041 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1238413 ']' 00:18:53.041 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1238413 00:18:53.041 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:53.041 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:53.041 09:53:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1238413 00:18:53.041 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:53.041 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:53.041 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1238413' 00:18:53.041 killing process with pid 1238413 00:18:53.041 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1238413 00:18:53.041 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1238413 00:18:53.315 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:53.315 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:53.315 00:18:53.315 real 0m50.602s 00:18:53.315 user 3m15.692s 00:18:53.315 sys 0m3.243s 00:18:53.315 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:53.315 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:53.315 ************************************ 00:18:53.315 END TEST nvmf_vfio_user 00:18:53.315 ************************************ 00:18:53.315 09:53:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:53.315 09:53:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:53.315 09:53:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:53.315 09:53:21 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:18:53.315 ************************************ 00:18:53.315 START TEST nvmf_vfio_user_nvme_compliance 00:18:53.315 ************************************ 00:18:53.315 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:53.574 * Looking for test storage... 00:18:53.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:53.574 09:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:53.574 09:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:53.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.574 --rc genhtml_branch_coverage=1 00:18:53.574 --rc genhtml_function_coverage=1 00:18:53.574 --rc genhtml_legend=1 00:18:53.574 --rc geninfo_all_blocks=1 00:18:53.574 --rc geninfo_unexecuted_blocks=1 00:18:53.574 00:18:53.574 ' 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:53.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.574 --rc genhtml_branch_coverage=1 00:18:53.574 --rc genhtml_function_coverage=1 00:18:53.574 --rc genhtml_legend=1 00:18:53.574 --rc geninfo_all_blocks=1 00:18:53.574 --rc geninfo_unexecuted_blocks=1 00:18:53.574 00:18:53.574 ' 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:53.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.574 --rc genhtml_branch_coverage=1 00:18:53.574 --rc genhtml_function_coverage=1 00:18:53.574 --rc 
genhtml_legend=1 00:18:53.574 --rc geninfo_all_blocks=1 00:18:53.574 --rc geninfo_unexecuted_blocks=1 00:18:53.574 00:18:53.574 ' 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:53.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.574 --rc genhtml_branch_coverage=1 00:18:53.574 --rc genhtml_function_coverage=1 00:18:53.574 --rc genhtml_legend=1 00:18:53.574 --rc geninfo_all_blocks=1 00:18:53.574 --rc geninfo_unexecuted_blocks=1 00:18:53.574 00:18:53.574 ' 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.574 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.575 09:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:53.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:53.575 09:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1239044 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1239044' 00:18:53.575 Process pid: 1239044 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1239044 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1239044 ']' 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:53.575 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:53.575 [2024-12-07 09:53:22.244778] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:53.575 [2024-12-07 09:53:22.244826] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.834 [2024-12-07 09:53:22.299454] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:53.834 [2024-12-07 09:53:22.340908] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.834 [2024-12-07 09:53:22.340950] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.834 [2024-12-07 09:53:22.340958] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.834 [2024-12-07 09:53:22.340964] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.834 [2024-12-07 09:53:22.340970] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:53.834 [2024-12-07 09:53:22.341015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.834 [2024-12-07 09:53:22.341037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.834 [2024-12-07 09:53:22.341039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.834 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:53.834 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:18:53.834 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:54.768 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:54.768 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:54.768 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:54.768 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.768 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:54.768 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.768 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:54.768 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:54.768 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.768 09:53:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:54.768 malloc0 00:18:54.768 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.768 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:54.768 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.768 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:54.768 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.768 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:54.768 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.768 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:55.026 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.026 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:55.026 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.026 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:55.026 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:55.026 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:55.026 00:18:55.026 00:18:55.026 CUnit - A unit testing framework for C - Version 2.1-3 00:18:55.026 http://cunit.sourceforge.net/ 00:18:55.026 00:18:55.026 00:18:55.026 Suite: nvme_compliance 00:18:55.026 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-07 09:53:23.656391] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:55.026 [2024-12-07 09:53:23.657736] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:55.026 [2024-12-07 09:53:23.657752] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:55.026 [2024-12-07 09:53:23.657759] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:55.026 [2024-12-07 09:53:23.659410] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:55.026 passed 00:18:55.026 Test: admin_identify_ctrlr_verify_fused ...[2024-12-07 09:53:23.740954] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:55.026 [2024-12-07 09:53:23.743974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:55.285 passed 00:18:55.285 Test: admin_identify_ns ...[2024-12-07 09:53:23.821240] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:55.285 [2024-12-07 09:53:23.884969] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:55.285 [2024-12-07 09:53:23.892960] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:55.285 [2024-12-07 09:53:23.914056] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:18:55.285 passed 00:18:55.285 Test: admin_get_features_mandatory_features ...[2024-12-07 09:53:23.987977] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:55.285 [2024-12-07 09:53:23.990990] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:55.543 passed 00:18:55.543 Test: admin_get_features_optional_features ...[2024-12-07 09:53:24.069480] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:55.543 [2024-12-07 09:53:24.072496] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:55.543 passed 00:18:55.543 Test: admin_set_features_number_of_queues ...[2024-12-07 09:53:24.152270] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:55.543 [2024-12-07 09:53:24.265065] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:55.801 passed 00:18:55.801 Test: admin_get_log_page_mandatory_logs ...[2024-12-07 09:53:24.337038] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:55.801 [2024-12-07 09:53:24.342072] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:55.801 passed 00:18:55.801 Test: admin_get_log_page_with_lpo ...[2024-12-07 09:53:24.417218] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:55.801 [2024-12-07 09:53:24.484966] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:55.801 [2024-12-07 09:53:24.498021] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.060 passed 00:18:56.060 Test: fabric_property_get ...[2024-12-07 09:53:24.575934] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:56.060 [2024-12-07 09:53:24.577188] vfio_user.c:5600:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:56.060 [2024-12-07 09:53:24.578964] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.060 passed 00:18:56.060 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-07 09:53:24.656448] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:56.060 [2024-12-07 09:53:24.657684] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:56.060 [2024-12-07 09:53:24.659477] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.060 passed 00:18:56.060 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-07 09:53:24.738272] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:56.318 [2024-12-07 09:53:24.822956] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:56.318 [2024-12-07 09:53:24.838958] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:56.318 [2024-12-07 09:53:24.844033] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.318 passed 00:18:56.318 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-07 09:53:24.919899] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:56.318 [2024-12-07 09:53:24.921134] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:56.318 [2024-12-07 09:53:24.922920] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.318 passed 00:18:56.318 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-07 09:53:24.996605] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:56.579 [2024-12-07 09:53:25.071959] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:56.579 [2024-12-07 
09:53:25.095954] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:56.579 [2024-12-07 09:53:25.101032] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.579 passed 00:18:56.579 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-07 09:53:25.176735] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:56.579 [2024-12-07 09:53:25.177976] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:56.579 [2024-12-07 09:53:25.177998] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:56.579 [2024-12-07 09:53:25.182782] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.579 passed 00:18:56.579 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-07 09:53:25.258480] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:56.837 [2024-12-07 09:53:25.349963] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:56.837 [2024-12-07 09:53:25.357964] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:56.837 [2024-12-07 09:53:25.365957] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:56.837 [2024-12-07 09:53:25.373957] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:56.837 [2024-12-07 09:53:25.403037] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.837 passed 00:18:56.837 Test: admin_create_io_sq_verify_pc ...[2024-12-07 09:53:25.480756] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:56.837 [2024-12-07 09:53:25.496960] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:56.837 [2024-12-07 09:53:25.514941] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.837 passed 00:18:57.095 Test: admin_create_io_qp_max_qps ...[2024-12-07 09:53:25.593495] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:58.029 [2024-12-07 09:53:26.688960] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:18:58.594 [2024-12-07 09:53:27.072997] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:58.594 passed 00:18:58.594 Test: admin_create_io_sq_shared_cq ...[2024-12-07 09:53:27.152230] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:58.594 [2024-12-07 09:53:27.281954] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:58.853 [2024-12-07 09:53:27.321017] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:58.853 passed 00:18:58.853 00:18:58.853 Run Summary: Type Total Ran Passed Failed Inactive 00:18:58.853 suites 1 1 n/a 0 0 00:18:58.853 tests 18 18 18 0 0 00:18:58.853 asserts 360 360 360 0 n/a 00:18:58.853 00:18:58.853 Elapsed time = 1.505 seconds 00:18:58.853 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1239044 00:18:58.853 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1239044 ']' 00:18:58.853 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1239044 00:18:58.853 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:18:58.853 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:58.853 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1239044 00:18:58.853 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:58.853 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:58.853 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1239044' 00:18:58.853 killing process with pid 1239044 00:18:58.853 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1239044 00:18:58.853 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1239044 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:59.181 00:18:59.181 real 0m5.609s 00:18:59.181 user 0m15.684s 00:18:59.181 sys 0m0.529s 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:59.181 ************************************ 00:18:59.181 END TEST nvmf_vfio_user_nvme_compliance 00:18:59.181 ************************************ 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:59.181 ************************************ 00:18:59.181 START TEST nvmf_vfio_user_fuzz 00:18:59.181 ************************************ 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:59.181 * Looking for test storage... 00:18:59.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.181 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.182 09:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:59.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.182 --rc genhtml_branch_coverage=1 00:18:59.182 --rc genhtml_function_coverage=1 00:18:59.182 --rc genhtml_legend=1 00:18:59.182 --rc geninfo_all_blocks=1 00:18:59.182 --rc geninfo_unexecuted_blocks=1 00:18:59.182 00:18:59.182 ' 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:59.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.182 --rc genhtml_branch_coverage=1 00:18:59.182 --rc genhtml_function_coverage=1 00:18:59.182 --rc genhtml_legend=1 00:18:59.182 --rc geninfo_all_blocks=1 00:18:59.182 --rc geninfo_unexecuted_blocks=1 00:18:59.182 00:18:59.182 ' 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:59.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.182 --rc genhtml_branch_coverage=1 00:18:59.182 --rc genhtml_function_coverage=1 00:18:59.182 --rc genhtml_legend=1 00:18:59.182 --rc geninfo_all_blocks=1 00:18:59.182 --rc geninfo_unexecuted_blocks=1 00:18:59.182 00:18:59.182 ' 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:59.182 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:59.182 --rc genhtml_branch_coverage=1 00:18:59.182 --rc genhtml_function_coverage=1 00:18:59.182 --rc genhtml_legend=1 00:18:59.182 --rc geninfo_all_blocks=1 00:18:59.182 --rc geninfo_unexecuted_blocks=1 00:18:59.182 00:18:59.182 ' 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.182 09:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:59.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1240026 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1240026' 00:18:59.182 Process pid: 1240026 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1240026 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1240026 ']' 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:59.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:59.182 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:59.183 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:59.440 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:59.440 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:18:59.440 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:00.815 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:00.815 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.815 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:00.815 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.815 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:00.815 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:00.815 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.815 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:00.815 malloc0 00:19:00.815 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.815 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:00.815 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.816 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:00.816 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.816 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:00.816 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.816 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:00.816 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.816 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:00.816 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.816 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:00.816 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.816 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:19:00.816 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:32.895 Fuzzing completed. Shutting down the fuzz application 00:19:32.895 00:19:32.895 Dumping successful admin opcodes: 00:19:32.895 8, 9, 10, 24, 00:19:32.895 Dumping successful io opcodes: 00:19:32.895 0, 00:19:32.895 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1123415, total successful commands: 4423, random_seed: 1417986560 00:19:32.895 NS: 0x200003a1ef00 admin qp, Total commands completed: 278913, total successful commands: 2248, random_seed: 3203204992 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1240026 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1240026 ']' 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1240026 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1240026 00:19:32.895 09:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1240026' 00:19:32.895 killing process with pid 1240026 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1240026 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1240026 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:32.895 00:19:32.895 real 0m32.171s 00:19:32.895 user 0m33.399s 00:19:32.895 sys 0m27.790s 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:32.895 ************************************ 00:19:32.895 END TEST nvmf_vfio_user_fuzz 00:19:32.895 ************************************ 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:32.895 ************************************ 00:19:32.895 START TEST nvmf_auth_target 00:19:32.895 ************************************ 00:19:32.895 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:32.895 * Looking for test storage... 00:19:32.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:32.895 09:54:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.895 09:54:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:32.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.895 --rc genhtml_branch_coverage=1 00:19:32.895 --rc genhtml_function_coverage=1 00:19:32.895 --rc genhtml_legend=1 00:19:32.895 --rc geninfo_all_blocks=1 00:19:32.895 --rc geninfo_unexecuted_blocks=1 00:19:32.895 00:19:32.895 ' 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:32.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.895 --rc genhtml_branch_coverage=1 00:19:32.895 --rc genhtml_function_coverage=1 00:19:32.895 --rc genhtml_legend=1 00:19:32.895 --rc geninfo_all_blocks=1 00:19:32.895 --rc geninfo_unexecuted_blocks=1 00:19:32.895 00:19:32.895 ' 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:32.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.895 --rc genhtml_branch_coverage=1 00:19:32.895 --rc genhtml_function_coverage=1 00:19:32.895 --rc genhtml_legend=1 00:19:32.895 --rc geninfo_all_blocks=1 00:19:32.895 --rc geninfo_unexecuted_blocks=1 00:19:32.895 00:19:32.895 ' 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:32.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.895 --rc genhtml_branch_coverage=1 00:19:32.895 --rc genhtml_function_coverage=1 00:19:32.895 --rc genhtml_legend=1 00:19:32.895 
--rc geninfo_all_blocks=1 00:19:32.895 --rc geninfo_unexecuted_blocks=1 00:19:32.895 00:19:32.895 ' 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.895 
09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:32.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:32.895 09:54:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:19:32.895 09:54:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:32.895 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.079 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:37.079 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:37.079 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:37.079 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:37.079 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:37.079 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:37.079 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:37.079 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:37.079 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:37.079 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:37.079 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:37.079 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:37.079 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:37.079 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:37.079 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:37.079 09:54:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.079 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.079 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:37.080 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:37.080 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:37.080 09:54:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:37.080 Found net devices under 0000:86:00.0: cvl_0_0 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 
00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:37.080 Found net devices under 0000:86:00.1: cvl_0_1 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # is_hw=yes 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:37.080 09:54:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:37.080 09:54:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:37.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:19:37.080 00:19:37.080 --- 10.0.0.2 ping statistics --- 00:19:37.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.080 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:37.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:37.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:19:37.080 00:19:37.080 --- 10.0.0.1 ping statistics --- 00:19:37.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.080 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # return 0 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.080 09:54:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=1248443 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 1248443 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1248443 ']' 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:37.080 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.339 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:37.339 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:37.339 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:37.339 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:37.339 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.339 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.339 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1248472 00:19:37.339 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:37.339 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:37.339 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:37.339 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:37.339 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.339 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:37.339 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@750 -- # digest=null 00:19:37.339 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:19:37.339 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:37.339 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=6a1107e2edc388206bb647b9ec95cc4c61d3afa2f7c1ee29 00:19:37.597 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:19:37.597 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.B43 00:19:37.597 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 6a1107e2edc388206bb647b9ec95cc4c61d3afa2f7c1ee29 0 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 6a1107e2edc388206bb647b9ec95cc4c61d3afa2f7c1ee29 0 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=6a1107e2edc388206bb647b9ec95cc4c61d3afa2f7c1ee29 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.B43 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.B43 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.B43 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=1940a7fa468318ce507085a7e2d01a0152e6ce921e85dec221c8a1335c3fdda8 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.0P1 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 1940a7fa468318ce507085a7e2d01a0152e6ce921e85dec221c8a1335c3fdda8 3 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 1940a7fa468318ce507085a7e2d01a0152e6ce921e85dec221c8a1335c3fdda8 3 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=1940a7fa468318ce507085a7e2d01a0152e6ce921e85dec221c8a1335c3fdda8 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # digest=3 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.0P1 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.0P1 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.0P1 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=434c5093dfe47aaa0195a194f8037047 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.ImE 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 434c5093dfe47aaa0195a194f8037047 1 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 
434c5093dfe47aaa0195a194f8037047 1 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=434c5093dfe47aaa0195a194f8037047 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.ImE 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.ImE 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.ImE 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=299d694b55b96eca98e745c119fe933c4a509f507d2509af 00:19:37.598 09:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.PjO 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 299d694b55b96eca98e745c119fe933c4a509f507d2509af 2 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 299d694b55b96eca98e745c119fe933c4a509f507d2509af 2 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=299d694b55b96eca98e745c119fe933c4a509f507d2509af 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.PjO 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.PjO 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.PjO 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A 
digests 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=7bec52ed4d2b1ce46a3d3dddbcc477d248450bd9c92636ba 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.xlN 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 7bec52ed4d2b1ce46a3d3dddbcc477d248450bd9c92636ba 2 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 7bec52ed4d2b1ce46a3d3dddbcc477d248450bd9c92636ba 2 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=7bec52ed4d2b1ce46a3d3dddbcc477d248450bd9c92636ba 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:19:37.598 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.xlN 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.xlN 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.xlN 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=29d24235bd4c04e40491bc0e39c3c5fc 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.sbU 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 29d24235bd4c04e40491bc0e39c3c5fc 1 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 29d24235bd4c04e40491bc0e39c3c5fc 1 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=29d24235bd4c04e40491bc0e39c3c5fc 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 
00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.sbU 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.sbU 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.sbU 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=87794ab3ef6850d25f098bef2c2ba9332e177db4ab81d350746e0c413f8383b1 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.EPQ 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 87794ab3ef6850d25f098bef2c2ba9332e177db4ab81d350746e0c413f8383b1 3 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # 
format_key DHHC-1 87794ab3ef6850d25f098bef2c2ba9332e177db4ab81d350746e0c413f8383b1 3 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=87794ab3ef6850d25f098bef2c2ba9332e177db4ab81d350746e0c413f8383b1 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.EPQ 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.EPQ 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.EPQ 00:19:37.857 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:37.858 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1248443 00:19:37.858 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1248443 ']' 00:19:37.858 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.858 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:37.858 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:37.858 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:37.858 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.116 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:38.116 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:38.116 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1248472 /var/tmp/host.sock 00:19:38.116 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1248472 ']' 00:19:38.116 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:19:38.116 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.116 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:38.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:38.116 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.116 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.375 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:38.375 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:38.375 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:38.375 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.375 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.375 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.375 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:38.375 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.B43 00:19:38.375 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.375 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.375 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.375 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.B43 00:19:38.375 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.B43 00:19:38.375 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.0P1 ]] 00:19:38.375 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0P1 00:19:38.375 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.375 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.634 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.634 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0P1 00:19:38.634 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0P1 00:19:38.634 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:38.634 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ImE 00:19:38.634 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.634 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.634 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.634 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.ImE 00:19:38.634 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.ImE 00:19:38.893 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.PjO ]] 00:19:38.893 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PjO 00:19:38.893 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.893 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.893 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.893 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PjO 00:19:38.893 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PjO 00:19:39.152 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:39.152 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xlN 00:19:39.152 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.152 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.152 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.152 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.xlN 00:19:39.152 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.xlN 00:19:39.411 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.sbU ]] 00:19:39.411 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sbU 00:19:39.411 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.411 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.411 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.411 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sbU 00:19:39.411 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sbU 00:19:39.411 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:39.411 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.EPQ 00:19:39.411 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.411 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.411 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.411 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.EPQ 00:19:39.411 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.EPQ 00:19:39.670 09:54:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:39.670 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:39.670 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.670 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.670 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:39.670 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:39.928 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:39.928 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.928 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.928 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:39.928 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:39.929 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.929 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.929 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.929 09:54:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.929 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.929 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.929 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.929 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.188 00:19:40.188 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.188 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.188 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.446 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.446 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.446 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.446 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:40.446 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.446 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.446 { 00:19:40.446 "cntlid": 1, 00:19:40.446 "qid": 0, 00:19:40.446 "state": "enabled", 00:19:40.446 "thread": "nvmf_tgt_poll_group_000", 00:19:40.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:40.446 "listen_address": { 00:19:40.446 "trtype": "TCP", 00:19:40.446 "adrfam": "IPv4", 00:19:40.446 "traddr": "10.0.0.2", 00:19:40.446 "trsvcid": "4420" 00:19:40.446 }, 00:19:40.446 "peer_address": { 00:19:40.446 "trtype": "TCP", 00:19:40.446 "adrfam": "IPv4", 00:19:40.446 "traddr": "10.0.0.1", 00:19:40.446 "trsvcid": "41138" 00:19:40.446 }, 00:19:40.446 "auth": { 00:19:40.446 "state": "completed", 00:19:40.446 "digest": "sha256", 00:19:40.446 "dhgroup": "null" 00:19:40.446 } 00:19:40.446 } 00:19:40.446 ]' 00:19:40.446 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.446 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.446 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.446 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:40.446 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.446 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.446 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.446 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.704 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:19:40.704 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:19:41.269 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.269 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:41.269 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.269 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.269 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.269 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.269 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:19:41.269 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:41.527 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:41.527 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.527 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.527 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:41.527 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:41.527 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.527 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.527 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.527 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.527 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.527 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.528 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.528 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.786 00:19:41.786 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.786 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.786 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.044 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.044 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.044 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.044 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.044 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.044 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.044 { 00:19:42.044 "cntlid": 3, 00:19:42.044 "qid": 0, 00:19:42.044 "state": "enabled", 00:19:42.044 "thread": "nvmf_tgt_poll_group_000", 00:19:42.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:42.044 "listen_address": { 00:19:42.044 "trtype": "TCP", 00:19:42.044 "adrfam": "IPv4", 00:19:42.044 
"traddr": "10.0.0.2", 00:19:42.044 "trsvcid": "4420" 00:19:42.044 }, 00:19:42.044 "peer_address": { 00:19:42.044 "trtype": "TCP", 00:19:42.044 "adrfam": "IPv4", 00:19:42.044 "traddr": "10.0.0.1", 00:19:42.044 "trsvcid": "41170" 00:19:42.044 }, 00:19:42.044 "auth": { 00:19:42.044 "state": "completed", 00:19:42.044 "digest": "sha256", 00:19:42.044 "dhgroup": "null" 00:19:42.044 } 00:19:42.044 } 00:19:42.044 ]' 00:19:42.044 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.044 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.044 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.044 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:42.044 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.044 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.044 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.044 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.301 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:19:42.301 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:19:42.868 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.868 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:42.868 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.868 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.868 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.868 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.868 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:42.868 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:43.125 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:43.125 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.125 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.125 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:19:43.125 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:43.125 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.125 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.125 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.125 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.125 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.125 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.125 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.125 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.382 00:19:43.382 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.382 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.382 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.639 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.639 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.639 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.639 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.639 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.639 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.639 { 00:19:43.639 "cntlid": 5, 00:19:43.639 "qid": 0, 00:19:43.639 "state": "enabled", 00:19:43.639 "thread": "nvmf_tgt_poll_group_000", 00:19:43.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:43.639 "listen_address": { 00:19:43.639 "trtype": "TCP", 00:19:43.639 "adrfam": "IPv4", 00:19:43.639 "traddr": "10.0.0.2", 00:19:43.639 "trsvcid": "4420" 00:19:43.639 }, 00:19:43.639 "peer_address": { 00:19:43.639 "trtype": "TCP", 00:19:43.639 "adrfam": "IPv4", 00:19:43.639 "traddr": "10.0.0.1", 00:19:43.639 "trsvcid": "41192" 00:19:43.639 }, 00:19:43.639 "auth": { 00:19:43.639 "state": "completed", 00:19:43.639 "digest": "sha256", 00:19:43.639 "dhgroup": "null" 00:19:43.639 } 00:19:43.639 } 00:19:43.639 ]' 00:19:43.639 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.639 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.639 09:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.639 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:43.639 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.639 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.639 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.639 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.897 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:19:43.897 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:19:44.463 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.463 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:44.463 
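The `jq` assertions repeated throughout this log check three fields of each qpair's `auth` object (`digest`, `dhgroup`, `state`) as returned by `nvmf_subsystem_get_qpairs`. The same check can be sketched in Python against an abridged copy of the JSON printed above (illustration only; field names are taken from the log, this is not part of the test suite):

```python
import json

# Abridged qpair listing as printed by `rpc.py nvmf_subsystem_get_qpairs`
# in the log above (only the fields the test inspects are kept).
qpairs_json = '''
[
  {
    "cntlid": 5,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha256",
      "dhgroup": "null"
    }
  }
]
'''

qpairs = json.loads(qpairs_json)
auth = qpairs[0]["auth"]

# Equivalent of the script's jq probes:
#   .[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state
assert auth["digest"] == "sha256"
assert auth["dhgroup"] == "null"
assert auth["state"] == "completed"
print("auth negotiation verified")
```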
09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.463 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.463 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.463 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.463 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:44.463 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:44.720 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:44.720 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.721 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.721 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:44.721 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:44.721 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.721 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:44.721 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.721 09:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.721 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.721 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:44.721 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:44.721 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:44.977 00:19:44.978 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.978 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.978 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.236 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.236 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.236 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.236 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.236 09:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.236 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.236 { 00:19:45.236 "cntlid": 7, 00:19:45.236 "qid": 0, 00:19:45.236 "state": "enabled", 00:19:45.236 "thread": "nvmf_tgt_poll_group_000", 00:19:45.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:45.236 "listen_address": { 00:19:45.236 "trtype": "TCP", 00:19:45.236 "adrfam": "IPv4", 00:19:45.236 "traddr": "10.0.0.2", 00:19:45.236 "trsvcid": "4420" 00:19:45.236 }, 00:19:45.236 "peer_address": { 00:19:45.236 "trtype": "TCP", 00:19:45.236 "adrfam": "IPv4", 00:19:45.236 "traddr": "10.0.0.1", 00:19:45.236 "trsvcid": "41208" 00:19:45.236 }, 00:19:45.236 "auth": { 00:19:45.236 "state": "completed", 00:19:45.236 "digest": "sha256", 00:19:45.236 "dhgroup": "null" 00:19:45.236 } 00:19:45.236 } 00:19:45.236 ]' 00:19:45.236 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.236 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.236 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.236 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:45.236 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.236 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.236 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.236 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:45.494 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:19:45.494 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:19:46.059 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.059 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:46.059 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.059 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.059 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.059 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.059 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.059 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:46.059 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:46.317 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:46.318 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.318 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.318 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:46.318 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:46.318 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.318 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.318 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.318 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.318 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.318 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.318 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.318 09:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.575 00:19:46.575 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.575 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.575 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.832 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.832 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.832 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.832 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.832 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.832 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.832 { 00:19:46.832 "cntlid": 9, 00:19:46.832 "qid": 0, 00:19:46.832 "state": "enabled", 00:19:46.832 "thread": "nvmf_tgt_poll_group_000", 00:19:46.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:46.832 "listen_address": { 00:19:46.832 "trtype": "TCP", 00:19:46.832 "adrfam": "IPv4", 00:19:46.832 "traddr": "10.0.0.2", 00:19:46.832 "trsvcid": "4420" 00:19:46.832 }, 00:19:46.832 "peer_address": { 
00:19:46.832 "trtype": "TCP", 00:19:46.832 "adrfam": "IPv4", 00:19:46.832 "traddr": "10.0.0.1", 00:19:46.832 "trsvcid": "35836" 00:19:46.832 }, 00:19:46.832 "auth": { 00:19:46.832 "state": "completed", 00:19:46.832 "digest": "sha256", 00:19:46.832 "dhgroup": "ffdhe2048" 00:19:46.832 } 00:19:46.832 } 00:19:46.832 ]' 00:19:46.832 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.832 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.832 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.832 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:46.832 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.832 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.832 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.832 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.089 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:19:47.089 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:19:47.653 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.653 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:47.653 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.653 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.653 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.653 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.653 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:47.653 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:47.911 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:47.911 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.911 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.911 09:54:16 
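The `--dhchap-secret` strings passed to `nvme connect` above follow the NVMe-oF DH-HMAC-CHAP secret representation (`DHHC-1:<hash id>:<base64 blob>:`), where, per the NVMe specification, the blob is the raw key followed by its CRC-32 in little-endian byte order. A sketch that decodes one of the secrets from this log (the CRC rule is from the spec, reproduced here for illustration, not asserted by the log itself):

```python
import base64
import zlib

# One of the host secrets used with `nvme connect --dhchap-secret` in the log above.
secret = ("DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2"
          "MWQzYWZhMmY3YzFlZTI540ms5Q==:")

# "DHHC-1" prefix, hash indicator, base64 payload, trailing empty field.
prefix, hash_id, blob, _ = secret.split(":")
assert prefix == "DHHC-1"

raw = base64.b64decode(blob)
# Last 4 bytes: little-endian CRC-32 of the key bytes that precede them.
key, crc = raw[:-4], int.from_bytes(raw[-4:], "little")
print(f"hash id {hash_id}, key length {len(key)} bytes, "
      f"CRC matches: {zlib.crc32(key) == crc}")
```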
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:47.911 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:47.911 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.911 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.911 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.911 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.911 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.911 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.911 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.911 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.168 00:19:48.168 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.169 09:54:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.169 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.426 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.426 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.426 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.426 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.426 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.426 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.426 { 00:19:48.426 "cntlid": 11, 00:19:48.426 "qid": 0, 00:19:48.426 "state": "enabled", 00:19:48.426 "thread": "nvmf_tgt_poll_group_000", 00:19:48.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:48.426 "listen_address": { 00:19:48.426 "trtype": "TCP", 00:19:48.426 "adrfam": "IPv4", 00:19:48.426 "traddr": "10.0.0.2", 00:19:48.426 "trsvcid": "4420" 00:19:48.426 }, 00:19:48.426 "peer_address": { 00:19:48.426 "trtype": "TCP", 00:19:48.426 "adrfam": "IPv4", 00:19:48.426 "traddr": "10.0.0.1", 00:19:48.426 "trsvcid": "35858" 00:19:48.426 }, 00:19:48.426 "auth": { 00:19:48.426 "state": "completed", 00:19:48.426 "digest": "sha256", 00:19:48.426 "dhgroup": "ffdhe2048" 00:19:48.426 } 00:19:48.426 } 00:19:48.426 ]' 00:19:48.426 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.426 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:19:48.426 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.426 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:48.426 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.426 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.426 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.426 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.683 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:19:48.683 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:19:49.247 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.248 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:49.248 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.248 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.248 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.248 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.248 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:49.248 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:49.505 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:49.505 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.505 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.505 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:49.505 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:49.505 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.505 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.505 09:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.505 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.505 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.505 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.505 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.505 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.763 00:19:49.763 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.763 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.763 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.763 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.763 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.763 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.763 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.020 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.020 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.020 { 00:19:50.020 "cntlid": 13, 00:19:50.020 "qid": 0, 00:19:50.020 "state": "enabled", 00:19:50.020 "thread": "nvmf_tgt_poll_group_000", 00:19:50.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:50.020 "listen_address": { 00:19:50.020 "trtype": "TCP", 00:19:50.020 "adrfam": "IPv4", 00:19:50.020 "traddr": "10.0.0.2", 00:19:50.020 "trsvcid": "4420" 00:19:50.020 }, 00:19:50.020 "peer_address": { 00:19:50.020 "trtype": "TCP", 00:19:50.020 "adrfam": "IPv4", 00:19:50.020 "traddr": "10.0.0.1", 00:19:50.020 "trsvcid": "35884" 00:19:50.020 }, 00:19:50.020 "auth": { 00:19:50.020 "state": "completed", 00:19:50.020 "digest": "sha256", 00:19:50.020 "dhgroup": "ffdhe2048" 00:19:50.020 } 00:19:50.020 } 00:19:50.020 ]' 00:19:50.020 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.021 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.021 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.021 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:50.021 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.021 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.021 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:50.021 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.278 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:19:50.278 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:19:50.842 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.842 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:50.842 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.842 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.842 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.842 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.842 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:50.842 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:51.100 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:51.100 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.100 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.100 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:51.100 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:51.100 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.100 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:51.100 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.100 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.100 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.100 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:51.100 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.100 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.357 00:19:51.357 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.357 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.357 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.615 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.615 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.615 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.615 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.615 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.615 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.615 { 00:19:51.615 "cntlid": 15, 00:19:51.615 "qid": 0, 00:19:51.615 "state": "enabled", 00:19:51.615 "thread": "nvmf_tgt_poll_group_000", 00:19:51.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:51.615 "listen_address": { 00:19:51.615 "trtype": "TCP", 00:19:51.615 "adrfam": "IPv4", 00:19:51.615 "traddr": "10.0.0.2", 00:19:51.615 "trsvcid": 
"4420" 00:19:51.615 }, 00:19:51.615 "peer_address": { 00:19:51.615 "trtype": "TCP", 00:19:51.615 "adrfam": "IPv4", 00:19:51.615 "traddr": "10.0.0.1", 00:19:51.615 "trsvcid": "35902" 00:19:51.615 }, 00:19:51.615 "auth": { 00:19:51.615 "state": "completed", 00:19:51.615 "digest": "sha256", 00:19:51.615 "dhgroup": "ffdhe2048" 00:19:51.615 } 00:19:51.615 } 00:19:51.615 ]' 00:19:51.615 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.615 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.615 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.615 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:51.615 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.615 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.615 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.615 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.873 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:19:51.873 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:19:52.444 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.444 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:52.444 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.444 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.444 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.444 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.444 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.444 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:52.444 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:52.701 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:52.701 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.701 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.701 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:52.701 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:52.701 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.701 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.701 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.701 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.701 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.701 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.701 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.701 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.959 00:19:52.959 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.959 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:52.959 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.959 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.959 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.959 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.959 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.959 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.959 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.959 { 00:19:52.959 "cntlid": 17, 00:19:52.959 "qid": 0, 00:19:52.959 "state": "enabled", 00:19:52.959 "thread": "nvmf_tgt_poll_group_000", 00:19:52.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:52.959 "listen_address": { 00:19:52.959 "trtype": "TCP", 00:19:52.959 "adrfam": "IPv4", 00:19:52.959 "traddr": "10.0.0.2", 00:19:52.959 "trsvcid": "4420" 00:19:52.959 }, 00:19:52.959 "peer_address": { 00:19:52.959 "trtype": "TCP", 00:19:52.959 "adrfam": "IPv4", 00:19:52.959 "traddr": "10.0.0.1", 00:19:52.959 "trsvcid": "35928" 00:19:52.959 }, 00:19:52.959 "auth": { 00:19:52.959 "state": "completed", 00:19:52.959 "digest": "sha256", 00:19:52.959 "dhgroup": "ffdhe3072" 00:19:52.959 } 00:19:52.959 } 00:19:52.959 ]' 00:19:52.959 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.217 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.217 09:54:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.217 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:53.217 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.217 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.217 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.217 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.474 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:19:53.474 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:19:54.038 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.038 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:54.038 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.038 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.039 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.039 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.039 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.039 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.296 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:54.296 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.296 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.296 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:54.296 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:54.296 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.296 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.296 09:54:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.296 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.296 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.296 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.296 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.296 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.554 00:19:54.554 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.554 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.554 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.813 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.813 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.813 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.813 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.813 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.813 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.813 { 00:19:54.813 "cntlid": 19, 00:19:54.813 "qid": 0, 00:19:54.813 "state": "enabled", 00:19:54.813 "thread": "nvmf_tgt_poll_group_000", 00:19:54.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:54.813 "listen_address": { 00:19:54.813 "trtype": "TCP", 00:19:54.813 "adrfam": "IPv4", 00:19:54.813 "traddr": "10.0.0.2", 00:19:54.813 "trsvcid": "4420" 00:19:54.813 }, 00:19:54.813 "peer_address": { 00:19:54.813 "trtype": "TCP", 00:19:54.813 "adrfam": "IPv4", 00:19:54.813 "traddr": "10.0.0.1", 00:19:54.813 "trsvcid": "35956" 00:19:54.813 }, 00:19:54.813 "auth": { 00:19:54.813 "state": "completed", 00:19:54.813 "digest": "sha256", 00:19:54.813 "dhgroup": "ffdhe3072" 00:19:54.813 } 00:19:54.813 } 00:19:54.813 ]' 00:19:54.813 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.813 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.813 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.813 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:54.813 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.813 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.813 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:54.813 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.071 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:19:55.071 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:19:55.636 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.636 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:55.636 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.636 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.636 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.636 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.636 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:55.636 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:55.893 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:55.893 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.893 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.893 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:55.893 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:55.893 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.893 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.893 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.893 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.893 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.893 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.893 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.893 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.151 00:19:56.151 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.151 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.151 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.409 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.409 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.409 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.409 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.409 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.409 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.409 { 00:19:56.409 "cntlid": 21, 00:19:56.409 "qid": 0, 00:19:56.409 "state": "enabled", 00:19:56.409 "thread": "nvmf_tgt_poll_group_000", 00:19:56.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:56.409 "listen_address": { 
00:19:56.409 "trtype": "TCP", 00:19:56.409 "adrfam": "IPv4", 00:19:56.409 "traddr": "10.0.0.2", 00:19:56.409 "trsvcid": "4420" 00:19:56.409 }, 00:19:56.409 "peer_address": { 00:19:56.409 "trtype": "TCP", 00:19:56.409 "adrfam": "IPv4", 00:19:56.409 "traddr": "10.0.0.1", 00:19:56.409 "trsvcid": "50740" 00:19:56.409 }, 00:19:56.409 "auth": { 00:19:56.409 "state": "completed", 00:19:56.409 "digest": "sha256", 00:19:56.409 "dhgroup": "ffdhe3072" 00:19:56.409 } 00:19:56.409 } 00:19:56.409 ]' 00:19:56.409 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.409 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.409 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.409 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:56.409 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.409 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.409 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.409 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.667 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:19:56.667 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:19:57.234 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.234 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:57.234 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.234 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.234 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.234 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.234 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:57.234 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:57.491 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:57.491 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.491 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:19:57.491 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:57.491 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:57.491 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.492 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:57.492 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.492 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.492 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.492 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:57.492 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.492 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.748 00:19:57.748 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.748 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:19:57.748 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.748 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.748 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.749 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.749 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.005 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.005 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.005 { 00:19:58.005 "cntlid": 23, 00:19:58.005 "qid": 0, 00:19:58.005 "state": "enabled", 00:19:58.005 "thread": "nvmf_tgt_poll_group_000", 00:19:58.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:58.005 "listen_address": { 00:19:58.005 "trtype": "TCP", 00:19:58.005 "adrfam": "IPv4", 00:19:58.005 "traddr": "10.0.0.2", 00:19:58.005 "trsvcid": "4420" 00:19:58.005 }, 00:19:58.005 "peer_address": { 00:19:58.006 "trtype": "TCP", 00:19:58.006 "adrfam": "IPv4", 00:19:58.006 "traddr": "10.0.0.1", 00:19:58.006 "trsvcid": "50780" 00:19:58.006 }, 00:19:58.006 "auth": { 00:19:58.006 "state": "completed", 00:19:58.006 "digest": "sha256", 00:19:58.006 "dhgroup": "ffdhe3072" 00:19:58.006 } 00:19:58.006 } 00:19:58.006 ]' 00:19:58.006 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.006 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.006 09:54:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.006 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:58.006 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.006 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.006 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.006 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.263 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:19:58.263 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:19:58.828 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.828 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:58.828 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:58.828 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.828 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.828 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.828 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.828 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.828 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:59.087 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:59.087 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.087 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.087 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:59.087 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:59.087 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.087 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.087 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:59.087 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.087 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.087 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.087 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.087 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.345 00:19:59.345 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.345 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.345 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.602 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.602 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.602 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.602 09:54:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.602 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.602 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.602 { 00:19:59.602 "cntlid": 25, 00:19:59.602 "qid": 0, 00:19:59.602 "state": "enabled", 00:19:59.602 "thread": "nvmf_tgt_poll_group_000", 00:19:59.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:59.602 "listen_address": { 00:19:59.602 "trtype": "TCP", 00:19:59.602 "adrfam": "IPv4", 00:19:59.602 "traddr": "10.0.0.2", 00:19:59.602 "trsvcid": "4420" 00:19:59.602 }, 00:19:59.602 "peer_address": { 00:19:59.602 "trtype": "TCP", 00:19:59.602 "adrfam": "IPv4", 00:19:59.602 "traddr": "10.0.0.1", 00:19:59.602 "trsvcid": "50814" 00:19:59.602 }, 00:19:59.602 "auth": { 00:19:59.602 "state": "completed", 00:19:59.602 "digest": "sha256", 00:19:59.602 "dhgroup": "ffdhe4096" 00:19:59.602 } 00:19:59.602 } 00:19:59.602 ]' 00:19:59.602 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.602 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.602 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.602 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:59.602 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.602 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.602 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.602 09:54:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.860 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:19:59.860 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:20:00.425 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.425 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:00.425 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.425 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.425 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.425 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.425 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.425 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.683 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:00.683 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.683 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.683 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:00.683 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:00.683 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.683 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.683 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.683 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.683 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.683 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.683 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.683 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.940 00:20:00.940 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.941 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.941 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.198 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.198 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.198 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.198 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.198 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.198 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.198 { 00:20:01.198 "cntlid": 27, 00:20:01.198 "qid": 0, 00:20:01.198 "state": "enabled", 00:20:01.198 "thread": "nvmf_tgt_poll_group_000", 00:20:01.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:01.198 
"listen_address": { 00:20:01.198 "trtype": "TCP", 00:20:01.198 "adrfam": "IPv4", 00:20:01.198 "traddr": "10.0.0.2", 00:20:01.198 "trsvcid": "4420" 00:20:01.198 }, 00:20:01.198 "peer_address": { 00:20:01.198 "trtype": "TCP", 00:20:01.198 "adrfam": "IPv4", 00:20:01.198 "traddr": "10.0.0.1", 00:20:01.198 "trsvcid": "50856" 00:20:01.198 }, 00:20:01.198 "auth": { 00:20:01.198 "state": "completed", 00:20:01.198 "digest": "sha256", 00:20:01.198 "dhgroup": "ffdhe4096" 00:20:01.198 } 00:20:01.198 } 00:20:01.198 ]' 00:20:01.198 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.198 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.198 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.198 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:01.198 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.198 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.198 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.198 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.455 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:20:01.455 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:20:02.021 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.021 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:02.021 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.021 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.021 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.021 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.021 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:02.021 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:02.280 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:02.280 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.280 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:02.280 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:02.280 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:02.280 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.280 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.280 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.280 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.280 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.280 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.280 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.280 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.538 00:20:02.538 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:02.538 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.538 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.795 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.795 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.795 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.795 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.795 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.795 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.795 { 00:20:02.795 "cntlid": 29, 00:20:02.795 "qid": 0, 00:20:02.795 "state": "enabled", 00:20:02.795 "thread": "nvmf_tgt_poll_group_000", 00:20:02.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:02.795 "listen_address": { 00:20:02.795 "trtype": "TCP", 00:20:02.795 "adrfam": "IPv4", 00:20:02.795 "traddr": "10.0.0.2", 00:20:02.795 "trsvcid": "4420" 00:20:02.795 }, 00:20:02.795 "peer_address": { 00:20:02.795 "trtype": "TCP", 00:20:02.795 "adrfam": "IPv4", 00:20:02.795 "traddr": "10.0.0.1", 00:20:02.795 "trsvcid": "50880" 00:20:02.795 }, 00:20:02.795 "auth": { 00:20:02.795 "state": "completed", 00:20:02.795 "digest": "sha256", 00:20:02.795 "dhgroup": "ffdhe4096" 00:20:02.795 } 00:20:02.795 } 00:20:02.795 ]' 00:20:02.795 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.796 09:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.796 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.796 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:02.796 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.796 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.796 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.796 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.053 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:20:03.053 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:20:03.619 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.619 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:03.619 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.619 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.619 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.620 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.620 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:03.620 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:03.878 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:03.878 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.878 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.878 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:03.878 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:03.878 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.878 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:03.878 09:54:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.878 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.878 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.878 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:03.878 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.878 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:04.136 00:20:04.136 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.136 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.136 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.393 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.393 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.393 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.393 09:54:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.393 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.393 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.393 { 00:20:04.393 "cntlid": 31, 00:20:04.393 "qid": 0, 00:20:04.393 "state": "enabled", 00:20:04.393 "thread": "nvmf_tgt_poll_group_000", 00:20:04.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:04.393 "listen_address": { 00:20:04.393 "trtype": "TCP", 00:20:04.393 "adrfam": "IPv4", 00:20:04.393 "traddr": "10.0.0.2", 00:20:04.393 "trsvcid": "4420" 00:20:04.393 }, 00:20:04.393 "peer_address": { 00:20:04.393 "trtype": "TCP", 00:20:04.393 "adrfam": "IPv4", 00:20:04.393 "traddr": "10.0.0.1", 00:20:04.393 "trsvcid": "50904" 00:20:04.393 }, 00:20:04.393 "auth": { 00:20:04.393 "state": "completed", 00:20:04.393 "digest": "sha256", 00:20:04.393 "dhgroup": "ffdhe4096" 00:20:04.393 } 00:20:04.393 } 00:20:04.393 ]' 00:20:04.393 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.393 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.393 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.393 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:04.393 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.393 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.393 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.393 09:54:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.651 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:20:04.651 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:20:05.230 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.230 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:05.230 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.230 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.230 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.230 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.230 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.230 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:20:05.230 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.488 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:05.488 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.488 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.488 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:05.488 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:05.488 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.488 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.488 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.488 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.488 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.488 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.488 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.488 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.747 00:20:05.747 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.747 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.747 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.005 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.005 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.005 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.005 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.005 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.005 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.005 { 00:20:06.005 "cntlid": 33, 00:20:06.005 "qid": 0, 00:20:06.005 "state": "enabled", 00:20:06.005 "thread": "nvmf_tgt_poll_group_000", 00:20:06.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:06.005 "listen_address": { 
00:20:06.005 "trtype": "TCP", 00:20:06.005 "adrfam": "IPv4", 00:20:06.005 "traddr": "10.0.0.2", 00:20:06.005 "trsvcid": "4420" 00:20:06.005 }, 00:20:06.005 "peer_address": { 00:20:06.005 "trtype": "TCP", 00:20:06.005 "adrfam": "IPv4", 00:20:06.005 "traddr": "10.0.0.1", 00:20:06.005 "trsvcid": "44086" 00:20:06.005 }, 00:20:06.005 "auth": { 00:20:06.005 "state": "completed", 00:20:06.005 "digest": "sha256", 00:20:06.005 "dhgroup": "ffdhe6144" 00:20:06.005 } 00:20:06.005 } 00:20:06.005 ]' 00:20:06.005 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.005 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.005 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.005 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:06.005 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.005 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.005 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.005 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.263 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:20:06.263 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:20:06.828 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.828 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:06.828 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.828 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.828 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.828 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.828 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:06.828 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:07.086 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:07.086 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:20:07.086 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.086 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:07.086 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:07.086 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.086 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.086 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.086 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.086 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.086 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.086 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.086 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.344 00:20:07.344 09:54:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.344 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.344 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.601 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.601 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.601 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.601 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.601 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.601 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.601 { 00:20:07.601 "cntlid": 35, 00:20:07.601 "qid": 0, 00:20:07.601 "state": "enabled", 00:20:07.601 "thread": "nvmf_tgt_poll_group_000", 00:20:07.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:07.601 "listen_address": { 00:20:07.601 "trtype": "TCP", 00:20:07.601 "adrfam": "IPv4", 00:20:07.601 "traddr": "10.0.0.2", 00:20:07.601 "trsvcid": "4420" 00:20:07.601 }, 00:20:07.601 "peer_address": { 00:20:07.601 "trtype": "TCP", 00:20:07.601 "adrfam": "IPv4", 00:20:07.601 "traddr": "10.0.0.1", 00:20:07.601 "trsvcid": "44102" 00:20:07.601 }, 00:20:07.601 "auth": { 00:20:07.601 "state": "completed", 00:20:07.601 "digest": "sha256", 00:20:07.601 "dhgroup": "ffdhe6144" 00:20:07.601 } 00:20:07.601 } 00:20:07.601 ]' 00:20:07.601 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:20:07.601 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.601 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.859 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:07.859 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.859 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.859 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.859 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.859 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:20:07.859 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:20:08.425 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.682 09:54:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:08.682 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.682 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.682 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.682 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.682 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:08.682 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:08.682 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:08.682 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.682 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:08.682 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:08.683 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:08.683 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.683 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.683 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.683 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.683 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.683 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.683 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.683 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.248 00:20:09.248 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.248 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.248 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.248 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.249 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.249 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.249 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.249 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.249 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.249 { 00:20:09.249 "cntlid": 37, 00:20:09.249 "qid": 0, 00:20:09.249 "state": "enabled", 00:20:09.249 "thread": "nvmf_tgt_poll_group_000", 00:20:09.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:09.249 "listen_address": { 00:20:09.249 "trtype": "TCP", 00:20:09.249 "adrfam": "IPv4", 00:20:09.249 "traddr": "10.0.0.2", 00:20:09.249 "trsvcid": "4420" 00:20:09.249 }, 00:20:09.249 "peer_address": { 00:20:09.249 "trtype": "TCP", 00:20:09.249 "adrfam": "IPv4", 00:20:09.249 "traddr": "10.0.0.1", 00:20:09.249 "trsvcid": "44142" 00:20:09.249 }, 00:20:09.249 "auth": { 00:20:09.249 "state": "completed", 00:20:09.249 "digest": "sha256", 00:20:09.249 "dhgroup": "ffdhe6144" 00:20:09.249 } 00:20:09.249 } 00:20:09.249 ]' 00:20:09.249 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.507 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.507 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.507 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:09.507 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.507 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:09.507 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.507 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.768 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:20:09.768 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:20:10.401 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.401 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:10.401 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.401 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.401 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.401 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:10.401 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:10.401 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:10.401 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:10.401 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.401 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.401 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:10.401 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:10.401 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.401 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:10.401 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.401 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.401 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.401 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:10.401 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.401 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.687 00:20:10.955 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.955 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.955 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.955 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.955 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.955 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.955 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.955 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.955 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.955 { 00:20:10.955 "cntlid": 39, 00:20:10.955 "qid": 0, 00:20:10.955 "state": "enabled", 00:20:10.955 "thread": "nvmf_tgt_poll_group_000", 00:20:10.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:10.955 "listen_address": { 00:20:10.955 "trtype": 
"TCP", 00:20:10.955 "adrfam": "IPv4", 00:20:10.955 "traddr": "10.0.0.2", 00:20:10.955 "trsvcid": "4420" 00:20:10.955 }, 00:20:10.955 "peer_address": { 00:20:10.955 "trtype": "TCP", 00:20:10.955 "adrfam": "IPv4", 00:20:10.955 "traddr": "10.0.0.1", 00:20:10.955 "trsvcid": "44174" 00:20:10.955 }, 00:20:10.955 "auth": { 00:20:10.955 "state": "completed", 00:20:10.955 "digest": "sha256", 00:20:10.955 "dhgroup": "ffdhe6144" 00:20:10.955 } 00:20:10.955 } 00:20:10.955 ]' 00:20:10.955 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.955 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.955 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.242 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:11.242 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.242 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.242 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.242 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.242 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:20:11.242 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:20:11.812 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.812 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:11.812 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.812 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.812 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.812 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.812 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.812 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:11.812 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:12.071 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:12.071 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.071 09:54:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.071 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:12.071 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:12.071 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.071 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.071 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.071 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.071 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.071 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.071 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.071 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.637 00:20:12.638 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.638 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.638 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.897 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.897 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.897 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.897 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.897 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.897 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.897 { 00:20:12.897 "cntlid": 41, 00:20:12.897 "qid": 0, 00:20:12.897 "state": "enabled", 00:20:12.897 "thread": "nvmf_tgt_poll_group_000", 00:20:12.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:12.897 "listen_address": { 00:20:12.897 "trtype": "TCP", 00:20:12.897 "adrfam": "IPv4", 00:20:12.897 "traddr": "10.0.0.2", 00:20:12.897 "trsvcid": "4420" 00:20:12.897 }, 00:20:12.897 "peer_address": { 00:20:12.897 "trtype": "TCP", 00:20:12.897 "adrfam": "IPv4", 00:20:12.897 "traddr": "10.0.0.1", 00:20:12.897 "trsvcid": "44200" 00:20:12.897 }, 00:20:12.897 "auth": { 00:20:12.897 "state": "completed", 00:20:12.897 "digest": "sha256", 00:20:12.897 "dhgroup": "ffdhe8192" 00:20:12.897 } 00:20:12.897 } 00:20:12.897 ]' 00:20:12.897 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.897 09:54:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.897 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.897 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:12.897 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.897 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.897 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.897 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.156 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:20:13.156 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:20:13.724 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:20:13.724 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:13.724 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.724 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.724 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.724 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.724 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:13.724 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:13.982 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:13.982 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.982 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:13.982 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:13.982 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:13.982 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.982 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.982 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.982 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.982 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.982 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.983 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.983 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.241 00:20:14.241 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.241 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.241 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.500 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.500 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.500 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.500 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.500 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.500 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.500 { 00:20:14.500 "cntlid": 43, 00:20:14.500 "qid": 0, 00:20:14.500 "state": "enabled", 00:20:14.500 "thread": "nvmf_tgt_poll_group_000", 00:20:14.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:14.500 "listen_address": { 00:20:14.500 "trtype": "TCP", 00:20:14.500 "adrfam": "IPv4", 00:20:14.500 "traddr": "10.0.0.2", 00:20:14.500 "trsvcid": "4420" 00:20:14.500 }, 00:20:14.500 "peer_address": { 00:20:14.500 "trtype": "TCP", 00:20:14.500 "adrfam": "IPv4", 00:20:14.500 "traddr": "10.0.0.1", 00:20:14.500 "trsvcid": "44232" 00:20:14.500 }, 00:20:14.500 "auth": { 00:20:14.500 "state": "completed", 00:20:14.500 "digest": "sha256", 00:20:14.500 "dhgroup": "ffdhe8192" 00:20:14.500 } 00:20:14.500 } 00:20:14.500 ]' 00:20:14.500 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.500 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.500 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.760 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:14.760 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.760 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:14.760 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.760 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.019 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:20:15.019 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.587 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.154 00:20:16.154 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.154 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.154 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.413 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.413 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.413 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.413 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.413 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.413 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.413 { 00:20:16.413 "cntlid": 45, 00:20:16.413 "qid": 0, 00:20:16.413 "state": "enabled", 00:20:16.413 "thread": "nvmf_tgt_poll_group_000", 00:20:16.413 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:16.413 "listen_address": { 00:20:16.413 "trtype": "TCP", 00:20:16.413 "adrfam": "IPv4", 00:20:16.413 "traddr": "10.0.0.2", 00:20:16.413 "trsvcid": "4420" 00:20:16.413 }, 00:20:16.413 "peer_address": { 00:20:16.413 "trtype": "TCP", 00:20:16.413 "adrfam": "IPv4", 00:20:16.413 "traddr": "10.0.0.1", 00:20:16.413 "trsvcid": "40252" 00:20:16.413 }, 00:20:16.413 "auth": { 00:20:16.413 "state": "completed", 00:20:16.413 "digest": "sha256", 00:20:16.413 "dhgroup": "ffdhe8192" 00:20:16.413 } 00:20:16.413 } 00:20:16.413 ]' 00:20:16.413 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.413 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.413 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.413 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:16.413 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.413 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.413 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.413 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.670 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:20:16.670 09:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:20:17.238 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.238 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:17.238 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.238 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.238 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.238 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.238 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:17.238 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:17.496 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:17.496 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:20:17.496 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.496 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:17.496 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:17.496 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.496 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:17.496 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.496 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.496 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.496 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:17.496 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.496 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.063 00:20:18.063 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:18.063 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.063 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.063 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.063 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.063 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.063 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.063 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.063 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.063 { 00:20:18.063 "cntlid": 47, 00:20:18.063 "qid": 0, 00:20:18.063 "state": "enabled", 00:20:18.063 "thread": "nvmf_tgt_poll_group_000", 00:20:18.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:18.063 "listen_address": { 00:20:18.063 "trtype": "TCP", 00:20:18.063 "adrfam": "IPv4", 00:20:18.063 "traddr": "10.0.0.2", 00:20:18.063 "trsvcid": "4420" 00:20:18.063 }, 00:20:18.063 "peer_address": { 00:20:18.063 "trtype": "TCP", 00:20:18.063 "adrfam": "IPv4", 00:20:18.063 "traddr": "10.0.0.1", 00:20:18.063 "trsvcid": "40278" 00:20:18.063 }, 00:20:18.063 "auth": { 00:20:18.063 "state": "completed", 00:20:18.063 "digest": "sha256", 00:20:18.063 "dhgroup": "ffdhe8192" 00:20:18.063 } 00:20:18.063 } 00:20:18.063 ]' 00:20:18.063 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.063 09:54:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.063 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.321 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:18.321 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.321 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.321 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.321 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.321 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:20:18.321 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:20:18.887 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.144 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.145 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.403 00:20:19.403 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.403 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.403 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.661 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.661 09:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.661 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.661 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.661 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.661 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.661 { 00:20:19.661 "cntlid": 49, 00:20:19.661 "qid": 0, 00:20:19.661 "state": "enabled", 00:20:19.661 "thread": "nvmf_tgt_poll_group_000", 00:20:19.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:19.661 "listen_address": { 00:20:19.661 "trtype": "TCP", 00:20:19.661 "adrfam": "IPv4", 00:20:19.661 "traddr": "10.0.0.2", 00:20:19.661 "trsvcid": "4420" 00:20:19.661 }, 00:20:19.661 "peer_address": { 00:20:19.661 "trtype": "TCP", 00:20:19.661 "adrfam": "IPv4", 00:20:19.661 "traddr": "10.0.0.1", 00:20:19.661 "trsvcid": "40314" 00:20:19.661 }, 00:20:19.661 "auth": { 00:20:19.661 "state": "completed", 00:20:19.661 "digest": "sha384", 00:20:19.661 "dhgroup": "null" 00:20:19.661 } 00:20:19.661 } 00:20:19.661 ]' 00:20:19.661 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.661 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.661 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.920 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:19.920 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.920 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.920 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.920 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.920 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:20:19.920 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:20:20.486 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.486 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:20.486 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.486 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.744 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.744 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.744 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:20.744 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:20.744 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:20.744 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.744 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.744 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:20.744 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:20.744 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.745 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.745 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.745 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.745 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.745 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.745 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.745 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.003 00:20:21.003 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.003 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.003 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.261 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.261 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.261 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.261 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.261 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.261 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.262 { 00:20:21.262 "cntlid": 51, 
00:20:21.262 "qid": 0, 00:20:21.262 "state": "enabled", 00:20:21.262 "thread": "nvmf_tgt_poll_group_000", 00:20:21.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:21.262 "listen_address": { 00:20:21.262 "trtype": "TCP", 00:20:21.262 "adrfam": "IPv4", 00:20:21.262 "traddr": "10.0.0.2", 00:20:21.262 "trsvcid": "4420" 00:20:21.262 }, 00:20:21.262 "peer_address": { 00:20:21.262 "trtype": "TCP", 00:20:21.262 "adrfam": "IPv4", 00:20:21.262 "traddr": "10.0.0.1", 00:20:21.262 "trsvcid": "40336" 00:20:21.262 }, 00:20:21.262 "auth": { 00:20:21.262 "state": "completed", 00:20:21.262 "digest": "sha384", 00:20:21.262 "dhgroup": "null" 00:20:21.262 } 00:20:21.262 } 00:20:21.262 ]' 00:20:21.262 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.262 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.262 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.262 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:21.262 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.520 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.520 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.520 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.520 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret 
DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:20:21.520 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:20:22.085 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.085 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:22.085 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.085 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.085 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.343 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.343 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:22.343 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:22.343 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:20:22.343 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.343 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.343 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:22.343 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:22.343 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.343 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.343 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.343 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.343 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.343 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.343 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.343 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.602 00:20:22.602 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.602 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.602 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.860 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.860 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.860 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.860 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.860 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.860 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.860 { 00:20:22.860 "cntlid": 53, 00:20:22.860 "qid": 0, 00:20:22.860 "state": "enabled", 00:20:22.860 "thread": "nvmf_tgt_poll_group_000", 00:20:22.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:22.860 "listen_address": { 00:20:22.860 "trtype": "TCP", 00:20:22.860 "adrfam": "IPv4", 00:20:22.860 "traddr": "10.0.0.2", 00:20:22.860 "trsvcid": "4420" 00:20:22.860 }, 00:20:22.860 "peer_address": { 00:20:22.861 "trtype": "TCP", 00:20:22.861 "adrfam": "IPv4", 00:20:22.861 "traddr": "10.0.0.1", 00:20:22.861 "trsvcid": "40360" 00:20:22.861 }, 00:20:22.861 "auth": { 00:20:22.861 "state": "completed", 00:20:22.861 "digest": "sha384", 00:20:22.861 "dhgroup": "null" 00:20:22.861 } 00:20:22.861 } 
00:20:22.861 ]' 00:20:22.861 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.861 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.861 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.861 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:22.861 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.119 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.119 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.119 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.119 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:20:23.119 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:20:23.685 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.685 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.685 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:23.685 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.685 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.685 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.685 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.685 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:23.685 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:23.943 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:23.943 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.943 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:23.943 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:23.943 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:23.943 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.943 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:23.943 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.943 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.943 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.943 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:23.943 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.943 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.201 00:20:24.201 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.201 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.201 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.460 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.460 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:24.460 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.460 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.460 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.460 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.460 { 00:20:24.460 "cntlid": 55, 00:20:24.460 "qid": 0, 00:20:24.460 "state": "enabled", 00:20:24.460 "thread": "nvmf_tgt_poll_group_000", 00:20:24.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:24.460 "listen_address": { 00:20:24.460 "trtype": "TCP", 00:20:24.460 "adrfam": "IPv4", 00:20:24.460 "traddr": "10.0.0.2", 00:20:24.460 "trsvcid": "4420" 00:20:24.460 }, 00:20:24.460 "peer_address": { 00:20:24.460 "trtype": "TCP", 00:20:24.460 "adrfam": "IPv4", 00:20:24.460 "traddr": "10.0.0.1", 00:20:24.460 "trsvcid": "40380" 00:20:24.460 }, 00:20:24.460 "auth": { 00:20:24.460 "state": "completed", 00:20:24.460 "digest": "sha384", 00:20:24.460 "dhgroup": "null" 00:20:24.460 } 00:20:24.460 } 00:20:24.460 ]' 00:20:24.460 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.460 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.460 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.460 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:24.460 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.460 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.460 09:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.460 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.718 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:20:24.718 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:20:25.284 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.284 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:25.284 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.284 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.284 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.284 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.284 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.284 09:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.284 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.542 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:25.542 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.542 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.542 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:25.542 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:25.542 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.542 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.542 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.542 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.542 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.542 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.542 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.542 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.801 00:20:25.801 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.801 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.801 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.107 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.107 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.107 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.107 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.107 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.107 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.107 { 00:20:26.107 "cntlid": 57, 00:20:26.107 "qid": 0, 00:20:26.107 "state": "enabled", 00:20:26.107 "thread": "nvmf_tgt_poll_group_000", 00:20:26.107 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:26.107 "listen_address": { 00:20:26.107 "trtype": "TCP", 00:20:26.107 "adrfam": "IPv4", 00:20:26.107 "traddr": "10.0.0.2", 00:20:26.107 "trsvcid": "4420" 00:20:26.107 }, 00:20:26.107 "peer_address": { 00:20:26.107 "trtype": "TCP", 00:20:26.107 "adrfam": "IPv4", 00:20:26.107 "traddr": "10.0.0.1", 00:20:26.107 "trsvcid": "43850" 00:20:26.107 }, 00:20:26.107 "auth": { 00:20:26.107 "state": "completed", 00:20:26.107 "digest": "sha384", 00:20:26.107 "dhgroup": "ffdhe2048" 00:20:26.107 } 00:20:26.107 } 00:20:26.107 ]' 00:20:26.107 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.107 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.107 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.107 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:26.107 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.107 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.107 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.107 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.365 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret 
DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:20:26.365 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:20:26.929 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.929 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:26.929 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.929 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.929 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.929 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.929 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:26.929 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.186 09:54:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:27.187 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.187 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.187 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:27.187 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:27.187 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.187 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.187 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.187 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.187 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.187 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.187 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.187 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.445 00:20:27.445 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.445 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.445 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.445 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.445 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.445 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.445 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.445 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.445 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.445 { 00:20:27.445 "cntlid": 59, 00:20:27.445 "qid": 0, 00:20:27.445 "state": "enabled", 00:20:27.445 "thread": "nvmf_tgt_poll_group_000", 00:20:27.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:27.445 "listen_address": { 00:20:27.445 "trtype": "TCP", 00:20:27.445 "adrfam": "IPv4", 00:20:27.445 "traddr": "10.0.0.2", 00:20:27.445 "trsvcid": "4420" 00:20:27.445 }, 00:20:27.445 "peer_address": { 00:20:27.445 "trtype": "TCP", 00:20:27.445 "adrfam": "IPv4", 00:20:27.445 "traddr": "10.0.0.1", 00:20:27.445 "trsvcid": "43890" 00:20:27.445 }, 00:20:27.445 "auth": { 00:20:27.445 "state": 
"completed", 00:20:27.445 "digest": "sha384", 00:20:27.445 "dhgroup": "ffdhe2048" 00:20:27.445 } 00:20:27.445 } 00:20:27.445 ]' 00:20:27.445 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.703 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.703 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.703 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:27.703 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.703 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.703 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.703 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.961 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:20:27.961 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:20:28.527 09:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.527 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.528 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.786 00:20:28.786 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.786 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.786 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.044 
09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.044 { 00:20:29.044 "cntlid": 61, 00:20:29.044 "qid": 0, 00:20:29.044 "state": "enabled", 00:20:29.044 "thread": "nvmf_tgt_poll_group_000", 00:20:29.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:29.044 "listen_address": { 00:20:29.044 "trtype": "TCP", 00:20:29.044 "adrfam": "IPv4", 00:20:29.044 "traddr": "10.0.0.2", 00:20:29.044 "trsvcid": "4420" 00:20:29.044 }, 00:20:29.044 "peer_address": { 00:20:29.044 "trtype": "TCP", 00:20:29.044 "adrfam": "IPv4", 00:20:29.044 "traddr": "10.0.0.1", 00:20:29.044 "trsvcid": "43904" 00:20:29.044 }, 00:20:29.044 "auth": { 00:20:29.044 "state": "completed", 00:20:29.044 "digest": "sha384", 00:20:29.044 "dhgroup": "ffdhe2048" 00:20:29.044 } 00:20:29.044 } 00:20:29.044 ]' 00:20:29.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.303 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:29.303 09:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.303 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.303 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.303 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.303 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:20:29.303 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:20:29.868 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.868 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:29.868 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.868 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.868 
09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.868 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.868 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:29.868 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:30.126 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:30.126 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.126 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.126 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:30.126 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:30.126 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.126 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:30.126 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.126 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.126 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.126 09:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:30.126 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.126 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.385 00:20:30.385 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.385 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.385 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.643 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.643 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.643 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.643 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.643 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.643 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.643 { 00:20:30.643 "cntlid": 63, 00:20:30.643 
"qid": 0, 00:20:30.643 "state": "enabled", 00:20:30.643 "thread": "nvmf_tgt_poll_group_000", 00:20:30.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:30.643 "listen_address": { 00:20:30.643 "trtype": "TCP", 00:20:30.643 "adrfam": "IPv4", 00:20:30.643 "traddr": "10.0.0.2", 00:20:30.643 "trsvcid": "4420" 00:20:30.643 }, 00:20:30.643 "peer_address": { 00:20:30.643 "trtype": "TCP", 00:20:30.643 "adrfam": "IPv4", 00:20:30.643 "traddr": "10.0.0.1", 00:20:30.643 "trsvcid": "43934" 00:20:30.643 }, 00:20:30.643 "auth": { 00:20:30.643 "state": "completed", 00:20:30.643 "digest": "sha384", 00:20:30.643 "dhgroup": "ffdhe2048" 00:20:30.643 } 00:20:30.643 } 00:20:30.643 ]' 00:20:30.643 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.643 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.643 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.643 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:30.643 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.644 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.644 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.644 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.901 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:20:30.901 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:20:31.466 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.466 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:31.466 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.466 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.466 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.466 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.466 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.466 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:31.467 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:31.729 09:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:31.729 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.729 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.729 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:31.729 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:31.729 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.729 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.729 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.729 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.729 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.729 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.729 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.729 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.987 00:20:31.987 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.987 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.987 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.246 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.246 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.246 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.246 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.246 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.246 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.246 { 00:20:32.246 "cntlid": 65, 00:20:32.246 "qid": 0, 00:20:32.246 "state": "enabled", 00:20:32.246 "thread": "nvmf_tgt_poll_group_000", 00:20:32.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:32.246 "listen_address": { 00:20:32.246 "trtype": "TCP", 00:20:32.246 "adrfam": "IPv4", 00:20:32.246 "traddr": "10.0.0.2", 00:20:32.246 "trsvcid": "4420" 00:20:32.246 }, 00:20:32.246 "peer_address": { 00:20:32.246 "trtype": "TCP", 00:20:32.246 "adrfam": "IPv4", 00:20:32.246 "traddr": "10.0.0.1", 00:20:32.246 "trsvcid": "43956" 00:20:32.246 }, 00:20:32.246 "auth": { 00:20:32.246 "state": 
"completed", 00:20:32.246 "digest": "sha384", 00:20:32.246 "dhgroup": "ffdhe3072" 00:20:32.246 } 00:20:32.246 } 00:20:32.246 ]' 00:20:32.246 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.246 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.246 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.246 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:32.246 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.505 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.505 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.505 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.505 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:20:32.505 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret 
DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:20:33.071 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.071 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:33.071 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.071 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.071 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.071 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.071 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.071 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.329 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:33.329 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.329 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.329 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:33.329 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:20:33.329 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.329 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.329 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.329 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.329 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.329 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.329 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.329 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.587 00:20:33.587 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.587 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.587 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.845 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.845 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.845 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.845 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.845 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.845 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.845 { 00:20:33.845 "cntlid": 67, 00:20:33.845 "qid": 0, 00:20:33.845 "state": "enabled", 00:20:33.845 "thread": "nvmf_tgt_poll_group_000", 00:20:33.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:33.845 "listen_address": { 00:20:33.845 "trtype": "TCP", 00:20:33.845 "adrfam": "IPv4", 00:20:33.845 "traddr": "10.0.0.2", 00:20:33.845 "trsvcid": "4420" 00:20:33.845 }, 00:20:33.845 "peer_address": { 00:20:33.845 "trtype": "TCP", 00:20:33.845 "adrfam": "IPv4", 00:20:33.845 "traddr": "10.0.0.1", 00:20:33.845 "trsvcid": "43984" 00:20:33.845 }, 00:20:33.845 "auth": { 00:20:33.845 "state": "completed", 00:20:33.845 "digest": "sha384", 00:20:33.845 "dhgroup": "ffdhe3072" 00:20:33.845 } 00:20:33.845 } 00:20:33.845 ]' 00:20:33.845 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.845 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.845 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.845 09:55:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:33.845 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.103 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.103 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.103 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.103 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:20:34.103 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:20:34.669 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.669 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:34.669 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:34.669 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.669 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.669 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.669 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:34.669 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:34.941 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:34.941 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.941 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.941 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:34.941 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:34.941 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.941 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.941 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.941 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:34.941 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.941 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.941 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.941 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.199 00:20:35.199 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.199 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.199 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.456 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.456 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.456 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.456 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.456 09:55:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.456 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.456 { 00:20:35.456 "cntlid": 69, 00:20:35.456 "qid": 0, 00:20:35.456 "state": "enabled", 00:20:35.456 "thread": "nvmf_tgt_poll_group_000", 00:20:35.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:35.456 "listen_address": { 00:20:35.456 "trtype": "TCP", 00:20:35.456 "adrfam": "IPv4", 00:20:35.456 "traddr": "10.0.0.2", 00:20:35.456 "trsvcid": "4420" 00:20:35.456 }, 00:20:35.456 "peer_address": { 00:20:35.456 "trtype": "TCP", 00:20:35.456 "adrfam": "IPv4", 00:20:35.456 "traddr": "10.0.0.1", 00:20:35.456 "trsvcid": "57232" 00:20:35.456 }, 00:20:35.456 "auth": { 00:20:35.456 "state": "completed", 00:20:35.456 "digest": "sha384", 00:20:35.456 "dhgroup": "ffdhe3072" 00:20:35.456 } 00:20:35.456 } 00:20:35.456 ]' 00:20:35.456 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.456 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.456 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.456 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:35.456 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.457 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.457 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.457 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.712 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:20:35.712 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:20:36.278 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.278 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:36.278 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.278 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.278 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.278 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.278 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:36.278 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:36.535 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:36.535 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.535 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.535 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:36.535 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:36.535 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.535 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:36.535 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.535 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.535 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.535 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:36.535 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.535 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.792 00:20:36.792 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.792 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.792 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.051 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.051 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.051 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.051 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.051 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.051 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.051 { 00:20:37.051 "cntlid": 71, 00:20:37.051 "qid": 0, 00:20:37.051 "state": "enabled", 00:20:37.051 "thread": "nvmf_tgt_poll_group_000", 00:20:37.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:37.051 "listen_address": { 00:20:37.051 "trtype": "TCP", 00:20:37.051 "adrfam": "IPv4", 00:20:37.051 "traddr": "10.0.0.2", 00:20:37.051 "trsvcid": "4420" 00:20:37.051 }, 00:20:37.051 "peer_address": { 00:20:37.051 "trtype": "TCP", 00:20:37.051 "adrfam": "IPv4", 00:20:37.051 "traddr": "10.0.0.1", 
00:20:37.051 "trsvcid": "57246" 00:20:37.051 }, 00:20:37.051 "auth": { 00:20:37.051 "state": "completed", 00:20:37.051 "digest": "sha384", 00:20:37.051 "dhgroup": "ffdhe3072" 00:20:37.051 } 00:20:37.051 } 00:20:37.051 ]' 00:20:37.051 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.051 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.051 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.051 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:37.051 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.051 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.051 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.051 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.309 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:20:37.309 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:20:37.877 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.877 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:37.877 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.877 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.877 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.877 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.877 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.877 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.877 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.137 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:38.137 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.137 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.137 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:38.137 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:38.137 09:55:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.137 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.137 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.137 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.137 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.137 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.137 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.137 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.402 00:20:38.402 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.402 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.402 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.661 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.661 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.661 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.661 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.661 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.661 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.661 { 00:20:38.661 "cntlid": 73, 00:20:38.661 "qid": 0, 00:20:38.661 "state": "enabled", 00:20:38.661 "thread": "nvmf_tgt_poll_group_000", 00:20:38.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:38.661 "listen_address": { 00:20:38.661 "trtype": "TCP", 00:20:38.661 "adrfam": "IPv4", 00:20:38.661 "traddr": "10.0.0.2", 00:20:38.661 "trsvcid": "4420" 00:20:38.661 }, 00:20:38.661 "peer_address": { 00:20:38.661 "trtype": "TCP", 00:20:38.661 "adrfam": "IPv4", 00:20:38.661 "traddr": "10.0.0.1", 00:20:38.661 "trsvcid": "57294" 00:20:38.661 }, 00:20:38.661 "auth": { 00:20:38.661 "state": "completed", 00:20:38.661 "digest": "sha384", 00:20:38.661 "dhgroup": "ffdhe4096" 00:20:38.661 } 00:20:38.661 } 00:20:38.661 ]' 00:20:38.661 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.661 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.661 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.661 09:55:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:38.661 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.661 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.661 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.661 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.920 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:20:38.920 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:20:39.487 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.487 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:39.487 09:55:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.487 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.487 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.487 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.487 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.487 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.746 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:39.746 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.746 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:39.746 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:39.746 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:39.746 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.746 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.746 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.746 09:55:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.746 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.746 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.746 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.746 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.005 00:20:40.005 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.005 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.005 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.265 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.265 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.265 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.265 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:40.265 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.265 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.265 { 00:20:40.265 "cntlid": 75, 00:20:40.265 "qid": 0, 00:20:40.265 "state": "enabled", 00:20:40.265 "thread": "nvmf_tgt_poll_group_000", 00:20:40.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:40.265 "listen_address": { 00:20:40.265 "trtype": "TCP", 00:20:40.265 "adrfam": "IPv4", 00:20:40.265 "traddr": "10.0.0.2", 00:20:40.265 "trsvcid": "4420" 00:20:40.265 }, 00:20:40.265 "peer_address": { 00:20:40.265 "trtype": "TCP", 00:20:40.265 "adrfam": "IPv4", 00:20:40.265 "traddr": "10.0.0.1", 00:20:40.265 "trsvcid": "57322" 00:20:40.265 }, 00:20:40.265 "auth": { 00:20:40.265 "state": "completed", 00:20:40.265 "digest": "sha384", 00:20:40.265 "dhgroup": "ffdhe4096" 00:20:40.265 } 00:20:40.265 } 00:20:40.265 ]' 00:20:40.265 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.265 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.265 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.265 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:40.265 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.265 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.265 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.265 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.536 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:20:40.536 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:20:41.103 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.103 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:41.103 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.103 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.103 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.103 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.103 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:41.103 09:55:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:41.361 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:41.361 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.361 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.361 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:41.361 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:41.361 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.361 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.361 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.361 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.361 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.361 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.361 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.361 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.620 00:20:41.620 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.620 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.620 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.879 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.879 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.879 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.879 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.879 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.879 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.879 { 00:20:41.879 "cntlid": 77, 00:20:41.879 "qid": 0, 00:20:41.879 "state": "enabled", 00:20:41.879 "thread": "nvmf_tgt_poll_group_000", 00:20:41.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:41.879 "listen_address": { 00:20:41.879 "trtype": "TCP", 00:20:41.879 "adrfam": "IPv4", 00:20:41.879 "traddr": "10.0.0.2", 00:20:41.879 
"trsvcid": "4420" 00:20:41.879 }, 00:20:41.879 "peer_address": { 00:20:41.879 "trtype": "TCP", 00:20:41.879 "adrfam": "IPv4", 00:20:41.879 "traddr": "10.0.0.1", 00:20:41.879 "trsvcid": "57358" 00:20:41.879 }, 00:20:41.879 "auth": { 00:20:41.879 "state": "completed", 00:20:41.879 "digest": "sha384", 00:20:41.879 "dhgroup": "ffdhe4096" 00:20:41.879 } 00:20:41.879 } 00:20:41.879 ]' 00:20:41.879 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.879 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.879 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.879 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:41.879 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.879 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.879 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.879 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.137 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:20:42.137 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:20:42.702 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.702 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:42.702 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.702 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.702 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.702 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.702 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:42.702 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:42.967 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:42.967 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.967 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.967 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:42.967 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:42.967 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.967 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:42.967 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.967 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.967 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.967 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:42.967 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.967 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:43.224 00:20:43.224 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.224 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.224 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.481 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.481 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.481 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.481 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.481 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.481 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.481 { 00:20:43.481 "cntlid": 79, 00:20:43.481 "qid": 0, 00:20:43.481 "state": "enabled", 00:20:43.481 "thread": "nvmf_tgt_poll_group_000", 00:20:43.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:43.481 "listen_address": { 00:20:43.481 "trtype": "TCP", 00:20:43.481 "adrfam": "IPv4", 00:20:43.481 "traddr": "10.0.0.2", 00:20:43.481 "trsvcid": "4420" 00:20:43.481 }, 00:20:43.481 "peer_address": { 00:20:43.481 "trtype": "TCP", 00:20:43.481 "adrfam": "IPv4", 00:20:43.481 "traddr": "10.0.0.1", 00:20:43.481 "trsvcid": "57388" 00:20:43.481 }, 00:20:43.481 "auth": { 00:20:43.481 "state": "completed", 00:20:43.481 "digest": "sha384", 00:20:43.481 "dhgroup": "ffdhe4096" 00:20:43.481 } 00:20:43.481 } 00:20:43.481 ]' 00:20:43.481 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.481 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.481 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.481 09:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:43.481 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.481 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.481 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.481 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.738 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:20:43.738 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:20:44.301 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.301 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:44.301 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.301 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:44.301 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.301 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.301 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.301 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.301 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.558 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:44.558 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.558 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.558 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:44.558 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:44.558 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.558 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.558 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.558 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:44.558 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.558 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.558 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.558 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.815 00:20:44.815 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.815 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.815 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.073 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.073 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.073 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.073 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.073 09:55:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.073 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.073 { 00:20:45.073 "cntlid": 81, 00:20:45.073 "qid": 0, 00:20:45.073 "state": "enabled", 00:20:45.073 "thread": "nvmf_tgt_poll_group_000", 00:20:45.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:45.073 "listen_address": { 00:20:45.073 "trtype": "TCP", 00:20:45.073 "adrfam": "IPv4", 00:20:45.073 "traddr": "10.0.0.2", 00:20:45.073 "trsvcid": "4420" 00:20:45.073 }, 00:20:45.073 "peer_address": { 00:20:45.073 "trtype": "TCP", 00:20:45.073 "adrfam": "IPv4", 00:20:45.073 "traddr": "10.0.0.1", 00:20:45.073 "trsvcid": "57416" 00:20:45.073 }, 00:20:45.073 "auth": { 00:20:45.073 "state": "completed", 00:20:45.073 "digest": "sha384", 00:20:45.074 "dhgroup": "ffdhe6144" 00:20:45.074 } 00:20:45.074 } 00:20:45.074 ]' 00:20:45.074 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.074 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.074 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.074 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.074 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.074 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.074 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.074 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.333 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:20:45.333 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:20:45.900 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.900 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:45.900 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.900 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.900 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.900 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.900 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.900 09:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.159 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:46.159 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.159 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.159 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:46.159 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:46.159 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.159 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.159 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.159 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.159 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.159 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.159 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.159 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.726 00:20:46.726 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.726 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.726 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.726 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.726 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.726 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.726 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.726 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.726 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.726 { 00:20:46.726 "cntlid": 83, 00:20:46.726 "qid": 0, 00:20:46.726 "state": "enabled", 00:20:46.726 "thread": "nvmf_tgt_poll_group_000", 00:20:46.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:46.726 "listen_address": { 00:20:46.726 "trtype": "TCP", 00:20:46.726 "adrfam": "IPv4", 00:20:46.726 "traddr": "10.0.0.2", 00:20:46.726 
"trsvcid": "4420" 00:20:46.726 }, 00:20:46.726 "peer_address": { 00:20:46.726 "trtype": "TCP", 00:20:46.726 "adrfam": "IPv4", 00:20:46.726 "traddr": "10.0.0.1", 00:20:46.726 "trsvcid": "59748" 00:20:46.726 }, 00:20:46.726 "auth": { 00:20:46.726 "state": "completed", 00:20:46.726 "digest": "sha384", 00:20:46.726 "dhgroup": "ffdhe6144" 00:20:46.726 } 00:20:46.726 } 00:20:46.726 ]' 00:20:46.726 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.726 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.726 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.983 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:46.983 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.983 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.983 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.983 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.983 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:20:46.983 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:20:47.548 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.806 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.371 00:20:48.371 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.371 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.371 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.371 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.371 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.371 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.371 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.371 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.371 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.371 { 00:20:48.371 "cntlid": 85, 00:20:48.371 "qid": 0, 00:20:48.371 "state": "enabled", 00:20:48.371 "thread": "nvmf_tgt_poll_group_000", 00:20:48.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:48.371 "listen_address": { 00:20:48.371 "trtype": "TCP", 00:20:48.371 "adrfam": "IPv4", 00:20:48.371 "traddr": "10.0.0.2", 00:20:48.371 "trsvcid": "4420" 00:20:48.371 }, 00:20:48.371 "peer_address": { 00:20:48.371 "trtype": "TCP", 00:20:48.371 "adrfam": "IPv4", 00:20:48.371 "traddr": "10.0.0.1", 00:20:48.371 "trsvcid": "59768" 00:20:48.371 }, 00:20:48.371 "auth": { 00:20:48.371 "state": "completed", 00:20:48.371 "digest": "sha384", 00:20:48.371 "dhgroup": "ffdhe6144" 00:20:48.371 } 00:20:48.371 } 00:20:48.371 ]' 00:20:48.371 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.629 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.629 09:55:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.629 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:48.629 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.629 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.629 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.629 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.886 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:20:48.886 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:20:49.452 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.452 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:49.452 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.452 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.452 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.452 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.452 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:49.452 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:49.452 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:49.452 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.452 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.452 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:49.452 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:49.452 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.452 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:49.452 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.452 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.452 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.452 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:49.452 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.452 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.017 00:20:50.017 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.017 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.017 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.017 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.017 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.017 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.017 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:50.017 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.017 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.017 { 00:20:50.017 "cntlid": 87, 00:20:50.017 "qid": 0, 00:20:50.017 "state": "enabled", 00:20:50.017 "thread": "nvmf_tgt_poll_group_000", 00:20:50.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:50.017 "listen_address": { 00:20:50.017 "trtype": "TCP", 00:20:50.017 "adrfam": "IPv4", 00:20:50.017 "traddr": "10.0.0.2", 00:20:50.017 "trsvcid": "4420" 00:20:50.017 }, 00:20:50.017 "peer_address": { 00:20:50.017 "trtype": "TCP", 00:20:50.017 "adrfam": "IPv4", 00:20:50.017 "traddr": "10.0.0.1", 00:20:50.017 "trsvcid": "59808" 00:20:50.017 }, 00:20:50.017 "auth": { 00:20:50.017 "state": "completed", 00:20:50.017 "digest": "sha384", 00:20:50.017 "dhgroup": "ffdhe6144" 00:20:50.017 } 00:20:50.017 } 00:20:50.017 ]' 00:20:50.017 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.275 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.275 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.275 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:50.275 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.275 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.275 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.275 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:50.532 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=:
00:20:50.532 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=:
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:51.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:51.098 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:51.664
00:20:51.664 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:51.664 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:51.664 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:51.922 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:51.922 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:51.922 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:51.922 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:51.922 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:51.922 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:51.922 {
00:20:51.922 "cntlid": 89,
00:20:51.922 "qid": 0,
00:20:51.922 "state": "enabled",
00:20:51.922 "thread": "nvmf_tgt_poll_group_000",
00:20:51.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:20:51.922 "listen_address": {
00:20:51.922 "trtype": "TCP",
00:20:51.922 "adrfam": "IPv4",
00:20:51.922 "traddr": "10.0.0.2",
00:20:51.922 "trsvcid": "4420"
00:20:51.922 },
00:20:51.922 "peer_address": {
00:20:51.922 "trtype": "TCP",
00:20:51.922 "adrfam": "IPv4",
00:20:51.922 "traddr": "10.0.0.1",
00:20:51.922 "trsvcid": "59832"
00:20:51.922 },
00:20:51.922 "auth": {
00:20:51.922 "state": "completed",
00:20:51.922 "digest": "sha384",
00:20:51.922 "dhgroup": "ffdhe8192"
00:20:51.923 }
00:20:51.923 }
00:20:51.923 ]'
00:20:51.923 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:51.923 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:51.923 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:51.923 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:51.923 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:52.182 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:52.182 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:52.182 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:52.182 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=:
00:20:52.182 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=:
00:20:52.750 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:52.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:52.750 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:20:52.750 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:52.750 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.750 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:52.750 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:52.750 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:52.750 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:53.008 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:20:53.008 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:53.008 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:53.008 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:53.008 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:53.008 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:53.008 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:53.008 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:53.008 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.008 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:53.008 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:53.008 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:53.008 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:53.575
00:20:53.575 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:53.575 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:53.575 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:53.832 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:53.832 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:53.832 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:53.832 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.832 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:53.832 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:53.832 {
00:20:53.832 "cntlid": 91,
00:20:53.832 "qid": 0,
00:20:53.832 "state": "enabled",
00:20:53.832 "thread": "nvmf_tgt_poll_group_000",
00:20:53.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:20:53.832 "listen_address": {
00:20:53.832 "trtype": "TCP",
00:20:53.832 "adrfam": "IPv4",
00:20:53.832 "traddr": "10.0.0.2",
00:20:53.832 "trsvcid": "4420"
00:20:53.832 },
00:20:53.832 "peer_address": {
00:20:53.832 "trtype": "TCP",
00:20:53.832 "adrfam": "IPv4",
00:20:53.832 "traddr": "10.0.0.1",
00:20:53.832 "trsvcid": "59868"
00:20:53.832 },
00:20:53.832 "auth": {
00:20:53.832 "state": "completed",
00:20:53.832 "digest": "sha384",
00:20:53.832 "dhgroup": "ffdhe8192"
00:20:53.832 }
00:20:53.832 }
00:20:53.832 ]'
00:20:53.832 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:53.832 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:53.832 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:53.832 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:53.832 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:53.832 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:53.832 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:53.832 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:54.095 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==:
00:20:54.095 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==:
00:20:54.662 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:54.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:54.662 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:20:54.662 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:54.662 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:54.662 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:54.662 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:54.662 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:54.662 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:54.921 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:20:54.921 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:54.921 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:54.921 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:54.921 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:54.921 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:54.921 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:54.921 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:54.921 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:54.921 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:54.921 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:54.921 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:54.921 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:55.490
00:20:55.490 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:55.490 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:55.490 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:55.490 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:55.490 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:55.490 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:55.490 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:55.750 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:55.750 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:55.750 {
00:20:55.750 "cntlid": 93,
00:20:55.750 "qid": 0,
00:20:55.750 "state": "enabled",
00:20:55.750 "thread": "nvmf_tgt_poll_group_000",
00:20:55.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:20:55.750 "listen_address": {
00:20:55.750 "trtype": "TCP",
00:20:55.750 "adrfam": "IPv4",
00:20:55.750 "traddr": "10.0.0.2",
00:20:55.750 "trsvcid": "4420"
00:20:55.750 },
00:20:55.750 "peer_address": {
00:20:55.750 "trtype": "TCP",
00:20:55.750 "adrfam": "IPv4",
00:20:55.750 "traddr": "10.0.0.1",
00:20:55.750 "trsvcid": "59900"
00:20:55.750 },
00:20:55.750 "auth": {
00:20:55.750 "state": "completed",
00:20:55.750 "digest": "sha384",
00:20:55.750 "dhgroup": "ffdhe8192"
00:20:55.750 }
00:20:55.750 }
00:20:55.750 ]'
00:20:55.750 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:55.750 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:55.750 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:55.750 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:55.750 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:55.750 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:55.750 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:55.750 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:56.010 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y:
00:20:56.010 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y:
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:56.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:56.576 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:57.141
00:20:57.141 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:57.141 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:57.141 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:57.400 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:57.400 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:57.400 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:57.400 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:57.400 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:57.400 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:57.400 {
00:20:57.400 "cntlid": 95,
00:20:57.400 "qid": 0,
00:20:57.400 "state": "enabled",
00:20:57.400 "thread": "nvmf_tgt_poll_group_000",
00:20:57.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:20:57.400 "listen_address": {
00:20:57.400 "trtype": "TCP",
00:20:57.400 "adrfam": "IPv4",
00:20:57.400 "traddr": "10.0.0.2",
00:20:57.400 "trsvcid": "4420"
00:20:57.400 },
00:20:57.400 "peer_address": {
00:20:57.400 "trtype": "TCP",
00:20:57.400 "adrfam": "IPv4",
00:20:57.400 "traddr": "10.0.0.1",
00:20:57.400 "trsvcid": "38080"
00:20:57.400 },
00:20:57.400 "auth": {
00:20:57.400 "state": "completed",
00:20:57.400 "digest": "sha384",
00:20:57.400 "dhgroup": "ffdhe8192"
00:20:57.400 }
00:20:57.400 }
00:20:57.400 ]'
00:20:57.400 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:57.400 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:57.400 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:57.400 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:57.400 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:57.400 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:57.400 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:57.400 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:57.658 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=:
00:20:57.658 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=:
00:20:58.223 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:58.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:58.223 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:20:58.223 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:58.223 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:58.223 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:58.223 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:20:58.223 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:58.223 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:58.223 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:58.223 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:58.482 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:20:58.482 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:58.482 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:58.482 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:58.482 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:58.482 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:58.482 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:58.482 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:58.482 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:58.482 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:58.482 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:58.482 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:58.482 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:58.741
00:20:58.741 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:58.741 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:58.741 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:59.000 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:59.000 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:59.000 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:59.000 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:59.000 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.000 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:59.000 {
00:20:59.000 "cntlid": 97,
00:20:59.000 "qid": 0,
00:20:59.000 "state": "enabled",
00:20:59.000 "thread": "nvmf_tgt_poll_group_000",
00:20:59.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:20:59.000 "listen_address": {
00:20:59.000 "trtype": "TCP",
00:20:59.000 "adrfam": "IPv4",
00:20:59.000 "traddr": "10.0.0.2",
00:20:59.000 "trsvcid": "4420"
00:20:59.000 },
00:20:59.000 "peer_address": {
00:20:59.000 "trtype": "TCP",
00:20:59.000 "adrfam": "IPv4",
00:20:59.000 "traddr": "10.0.0.1",
00:20:59.000 "trsvcid": "38114"
00:20:59.000 },
00:20:59.000 "auth": {
00:20:59.000 "state": "completed",
00:20:59.000 "digest": "sha512",
00:20:59.000 "dhgroup": "null"
00:20:59.000 }
00:20:59.000 }
00:20:59.000 ]'
00:20:59.000 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:59.000 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:59.000 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:59.000 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:59.000 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:59.000 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:59.000 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:59.000 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:59.260 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=:
00:20:59.260 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=:
00:20:59.829 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:59.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:59.829 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:20:59.829 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:59.829 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:59.829 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.829 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:59.829 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:59.829 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:00.089 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:21:00.089 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:00.089 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:00.089 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:00.089 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:00.089 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:00.089 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:00.089 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.089 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:00.089 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.089 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:00.089 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:00.089 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:00.349
00:21:00.349 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:00.349 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:00.349 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:00.349 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:00.349 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.349 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.349 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.349 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.349 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.349 { 00:21:00.349 "cntlid": 99, 00:21:00.349 "qid": 0, 00:21:00.349 "state": "enabled", 00:21:00.349 "thread": "nvmf_tgt_poll_group_000", 00:21:00.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:00.349 "listen_address": { 00:21:00.349 "trtype": "TCP", 00:21:00.349 "adrfam": "IPv4", 00:21:00.349 "traddr": "10.0.0.2", 00:21:00.349 "trsvcid": "4420" 00:21:00.349 }, 00:21:00.349 "peer_address": { 00:21:00.349 "trtype": "TCP", 00:21:00.349 "adrfam": "IPv4", 00:21:00.349 "traddr": "10.0.0.1", 00:21:00.349 "trsvcid": "38138" 00:21:00.349 }, 00:21:00.349 "auth": { 00:21:00.349 "state": "completed", 00:21:00.349 "digest": "sha512", 00:21:00.349 "dhgroup": "null" 00:21:00.349 } 00:21:00.349 } 00:21:00.349 ]' 00:21:00.349 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.609 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.609 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.609 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:00.609 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.609 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.609 
09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.609 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.868 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:21:00.868 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:21:01.436 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.436 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:01.436 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.436 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.436 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.436 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.436 
09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:01.436 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:01.694 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:01.694 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.694 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:01.694 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:01.694 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:01.694 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.694 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.694 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.694 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.694 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.694 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.694 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.694 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.953 00:21:01.953 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.953 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.953 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.953 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.953 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.953 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.953 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.953 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.953 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.953 { 00:21:01.953 "cntlid": 101, 00:21:01.953 "qid": 0, 00:21:01.953 "state": "enabled", 00:21:01.953 "thread": "nvmf_tgt_poll_group_000", 00:21:01.953 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:01.953 "listen_address": { 00:21:01.953 "trtype": "TCP", 00:21:01.953 "adrfam": "IPv4", 00:21:01.953 "traddr": "10.0.0.2", 00:21:01.953 "trsvcid": "4420" 00:21:01.953 }, 00:21:01.953 "peer_address": { 00:21:01.953 "trtype": "TCP", 00:21:01.953 "adrfam": "IPv4", 00:21:01.953 "traddr": "10.0.0.1", 00:21:01.953 "trsvcid": "38160" 00:21:01.953 }, 00:21:01.953 "auth": { 00:21:01.953 "state": "completed", 00:21:01.953 "digest": "sha512", 00:21:01.953 "dhgroup": "null" 00:21:01.953 } 00:21:01.953 } 00:21:01.953 ]' 00:21:01.953 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.211 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.211 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.211 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:02.211 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.211 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.211 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.211 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.470 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:21:02.470 09:55:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:21:03.036 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.036 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:03.036 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.036 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.037 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.037 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.037 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:03.037 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:03.037 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:03.037 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:21:03.037 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:03.037 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:03.037 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:03.037 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.037 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:03.037 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.037 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.037 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.037 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:03.037 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.037 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.293 00:21:03.294 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.294 
09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.294 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.551 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.551 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.551 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.551 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.551 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.551 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.551 { 00:21:03.551 "cntlid": 103, 00:21:03.551 "qid": 0, 00:21:03.551 "state": "enabled", 00:21:03.551 "thread": "nvmf_tgt_poll_group_000", 00:21:03.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:03.551 "listen_address": { 00:21:03.551 "trtype": "TCP", 00:21:03.551 "adrfam": "IPv4", 00:21:03.551 "traddr": "10.0.0.2", 00:21:03.551 "trsvcid": "4420" 00:21:03.551 }, 00:21:03.551 "peer_address": { 00:21:03.551 "trtype": "TCP", 00:21:03.551 "adrfam": "IPv4", 00:21:03.552 "traddr": "10.0.0.1", 00:21:03.552 "trsvcid": "38194" 00:21:03.552 }, 00:21:03.552 "auth": { 00:21:03.552 "state": "completed", 00:21:03.552 "digest": "sha512", 00:21:03.552 "dhgroup": "null" 00:21:03.552 } 00:21:03.552 } 00:21:03.552 ]' 00:21:03.552 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.552 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:21:03.552 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.809 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:03.809 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.809 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.809 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.809 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.065 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:21:04.065 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.629 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.887 00:21:04.887 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.887 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.887 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.148 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.148 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.148 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:05.148 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.148 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.148 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.148 { 00:21:05.148 "cntlid": 105, 00:21:05.148 "qid": 0, 00:21:05.148 "state": "enabled", 00:21:05.148 "thread": "nvmf_tgt_poll_group_000", 00:21:05.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:05.148 "listen_address": { 00:21:05.148 "trtype": "TCP", 00:21:05.148 "adrfam": "IPv4", 00:21:05.148 "traddr": "10.0.0.2", 00:21:05.148 "trsvcid": "4420" 00:21:05.148 }, 00:21:05.148 "peer_address": { 00:21:05.148 "trtype": "TCP", 00:21:05.148 "adrfam": "IPv4", 00:21:05.148 "traddr": "10.0.0.1", 00:21:05.148 "trsvcid": "38206" 00:21:05.148 }, 00:21:05.148 "auth": { 00:21:05.148 "state": "completed", 00:21:05.148 "digest": "sha512", 00:21:05.148 "dhgroup": "ffdhe2048" 00:21:05.148 } 00:21:05.148 } 00:21:05.148 ]' 00:21:05.148 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.148 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.148 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.148 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:05.148 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.409 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.409 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.409 09:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.409 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:21:05.409 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:21:05.979 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.979 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:05.979 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.979 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.979 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.979 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.979 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.979 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.240 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:06.240 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.240 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.240 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:06.240 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:06.240 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.240 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.240 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.240 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.240 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.241 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.241 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.241 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.501 00:21:06.501 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.501 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.501 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.819 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.819 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.819 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.819 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.819 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.819 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.819 { 00:21:06.819 "cntlid": 107, 00:21:06.819 "qid": 0, 00:21:06.819 "state": "enabled", 00:21:06.819 "thread": "nvmf_tgt_poll_group_000", 00:21:06.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:06.819 
"listen_address": { 00:21:06.819 "trtype": "TCP", 00:21:06.819 "adrfam": "IPv4", 00:21:06.819 "traddr": "10.0.0.2", 00:21:06.819 "trsvcid": "4420" 00:21:06.819 }, 00:21:06.819 "peer_address": { 00:21:06.819 "trtype": "TCP", 00:21:06.819 "adrfam": "IPv4", 00:21:06.819 "traddr": "10.0.0.1", 00:21:06.819 "trsvcid": "56278" 00:21:06.819 }, 00:21:06.819 "auth": { 00:21:06.819 "state": "completed", 00:21:06.819 "digest": "sha512", 00:21:06.819 "dhgroup": "ffdhe2048" 00:21:06.819 } 00:21:06.819 } 00:21:06.819 ]' 00:21:06.819 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.819 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.819 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.819 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:06.819 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.819 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.819 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.819 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.146 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:21:07.146 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:21:07.481 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.481 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:07.481 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.481 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.762 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.762 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.762 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.762 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.762 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:07.762 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.762 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:07.762 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:07.762 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:07.762 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.762 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.762 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.762 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.762 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.762 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.762 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.762 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.029 00:21:08.029 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:08.029 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.029 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.287 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.287 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.287 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.287 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.287 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.287 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.287 { 00:21:08.287 "cntlid": 109, 00:21:08.287 "qid": 0, 00:21:08.287 "state": "enabled", 00:21:08.287 "thread": "nvmf_tgt_poll_group_000", 00:21:08.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:08.287 "listen_address": { 00:21:08.287 "trtype": "TCP", 00:21:08.287 "adrfam": "IPv4", 00:21:08.287 "traddr": "10.0.0.2", 00:21:08.287 "trsvcid": "4420" 00:21:08.287 }, 00:21:08.287 "peer_address": { 00:21:08.287 "trtype": "TCP", 00:21:08.287 "adrfam": "IPv4", 00:21:08.287 "traddr": "10.0.0.1", 00:21:08.287 "trsvcid": "56308" 00:21:08.287 }, 00:21:08.287 "auth": { 00:21:08.287 "state": "completed", 00:21:08.287 "digest": "sha512", 00:21:08.287 "dhgroup": "ffdhe2048" 00:21:08.287 } 00:21:08.287 } 00:21:08.287 ]' 00:21:08.287 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.287 09:55:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.287 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.287 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:08.287 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.287 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.287 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.287 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.546 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:21:08.546 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:21:09.113 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.113 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:09.113 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.113 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.113 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.113 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.113 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:09.113 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:09.373 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:09.373 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.373 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.373 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:09.373 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:09.373 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.373 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:09.373 09:55:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.373 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.373 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.373 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:09.373 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.373 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.632 00:21:09.632 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.632 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.632 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.891 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.891 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.891 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.891 09:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.891 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.891 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.891 { 00:21:09.891 "cntlid": 111, 00:21:09.891 "qid": 0, 00:21:09.891 "state": "enabled", 00:21:09.891 "thread": "nvmf_tgt_poll_group_000", 00:21:09.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:09.891 "listen_address": { 00:21:09.891 "trtype": "TCP", 00:21:09.891 "adrfam": "IPv4", 00:21:09.891 "traddr": "10.0.0.2", 00:21:09.891 "trsvcid": "4420" 00:21:09.891 }, 00:21:09.891 "peer_address": { 00:21:09.891 "trtype": "TCP", 00:21:09.891 "adrfam": "IPv4", 00:21:09.891 "traddr": "10.0.0.1", 00:21:09.891 "trsvcid": "56348" 00:21:09.891 }, 00:21:09.891 "auth": { 00:21:09.891 "state": "completed", 00:21:09.891 "digest": "sha512", 00:21:09.891 "dhgroup": "ffdhe2048" 00:21:09.891 } 00:21:09.891 } 00:21:09.891 ]' 00:21:09.891 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.891 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.891 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.891 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:09.891 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.891 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.891 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.891 09:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.150 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:21:10.150 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:21:10.718 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.718 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:10.718 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.718 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.718 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.718 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:10.718 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.718 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:21:10.718 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.977 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:10.977 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.978 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:10.978 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:10.978 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:10.978 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.978 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.978 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.978 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.978 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.978 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.978 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.978 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.236 00:21:11.236 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.236 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.236 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.494 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.494 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.494 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.494 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.494 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.494 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.494 { 00:21:11.494 "cntlid": 113, 00:21:11.494 "qid": 0, 00:21:11.494 "state": "enabled", 00:21:11.494 "thread": "nvmf_tgt_poll_group_000", 00:21:11.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:11.494 "listen_address": { 
00:21:11.494 "trtype": "TCP", 00:21:11.494 "adrfam": "IPv4", 00:21:11.494 "traddr": "10.0.0.2", 00:21:11.494 "trsvcid": "4420" 00:21:11.494 }, 00:21:11.494 "peer_address": { 00:21:11.494 "trtype": "TCP", 00:21:11.494 "adrfam": "IPv4", 00:21:11.494 "traddr": "10.0.0.1", 00:21:11.494 "trsvcid": "56376" 00:21:11.494 }, 00:21:11.494 "auth": { 00:21:11.494 "state": "completed", 00:21:11.494 "digest": "sha512", 00:21:11.494 "dhgroup": "ffdhe3072" 00:21:11.494 } 00:21:11.494 } 00:21:11.494 ]' 00:21:11.494 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.494 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.494 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.494 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:11.494 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.494 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.494 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.494 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.752 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:21:11.752 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:21:12.318 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.318 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:12.318 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.318 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.318 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.318 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.318 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:12.318 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:12.576 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:12.576 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:21:12.576 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:12.576 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:12.576 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:12.576 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.576 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.576 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.576 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.576 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.576 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.576 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.576 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.834 00:21:12.834 09:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.834 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.834 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.092 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.092 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.092 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.092 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.092 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.092 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.092 { 00:21:13.092 "cntlid": 115, 00:21:13.092 "qid": 0, 00:21:13.092 "state": "enabled", 00:21:13.092 "thread": "nvmf_tgt_poll_group_000", 00:21:13.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:13.092 "listen_address": { 00:21:13.092 "trtype": "TCP", 00:21:13.092 "adrfam": "IPv4", 00:21:13.092 "traddr": "10.0.0.2", 00:21:13.092 "trsvcid": "4420" 00:21:13.092 }, 00:21:13.092 "peer_address": { 00:21:13.092 "trtype": "TCP", 00:21:13.092 "adrfam": "IPv4", 00:21:13.092 "traddr": "10.0.0.1", 00:21:13.092 "trsvcid": "56400" 00:21:13.092 }, 00:21:13.092 "auth": { 00:21:13.092 "state": "completed", 00:21:13.092 "digest": "sha512", 00:21:13.092 "dhgroup": "ffdhe3072" 00:21:13.092 } 00:21:13.092 } 00:21:13.092 ]' 00:21:13.092 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:21:13.092 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.092 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.092 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:13.092 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.092 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.092 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.092 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.351 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:21:13.351 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:21:13.917 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.917 09:55:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:13.917 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.917 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.917 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.917 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.917 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.917 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.176 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:14.176 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.176 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.176 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:14.176 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:14.176 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.176 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.176 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.176 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.176 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.176 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.176 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.176 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.434 00:21:14.434 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.434 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.434 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.694 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.694 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.694 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.694 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.694 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.694 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.694 { 00:21:14.694 "cntlid": 117, 00:21:14.694 "qid": 0, 00:21:14.694 "state": "enabled", 00:21:14.694 "thread": "nvmf_tgt_poll_group_000", 00:21:14.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:14.694 "listen_address": { 00:21:14.694 "trtype": "TCP", 00:21:14.694 "adrfam": "IPv4", 00:21:14.694 "traddr": "10.0.0.2", 00:21:14.694 "trsvcid": "4420" 00:21:14.694 }, 00:21:14.694 "peer_address": { 00:21:14.694 "trtype": "TCP", 00:21:14.694 "adrfam": "IPv4", 00:21:14.694 "traddr": "10.0.0.1", 00:21:14.694 "trsvcid": "56428" 00:21:14.694 }, 00:21:14.694 "auth": { 00:21:14.694 "state": "completed", 00:21:14.694 "digest": "sha512", 00:21:14.694 "dhgroup": "ffdhe3072" 00:21:14.694 } 00:21:14.694 } 00:21:14.694 ]' 00:21:14.694 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.694 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.694 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.694 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:14.694 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.694 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:14.694 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.694 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.952 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:21:14.953 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:21:15.517 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.517 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:15.517 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.517 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.517 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.517 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:15.517 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:15.517 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:15.774 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:15.774 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.774 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:15.774 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:15.774 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:15.774 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.774 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:15.774 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.774 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.774 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.774 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:15.774 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.774 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.032 00:21:16.032 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.032 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.032 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.291 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.291 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.291 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.291 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.291 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.291 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.291 { 00:21:16.291 "cntlid": 119, 00:21:16.291 "qid": 0, 00:21:16.291 "state": "enabled", 00:21:16.291 "thread": "nvmf_tgt_poll_group_000", 00:21:16.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:16.291 "listen_address": { 00:21:16.291 
"trtype": "TCP", 00:21:16.291 "adrfam": "IPv4", 00:21:16.291 "traddr": "10.0.0.2", 00:21:16.291 "trsvcid": "4420" 00:21:16.291 }, 00:21:16.291 "peer_address": { 00:21:16.291 "trtype": "TCP", 00:21:16.291 "adrfam": "IPv4", 00:21:16.291 "traddr": "10.0.0.1", 00:21:16.291 "trsvcid": "33098" 00:21:16.291 }, 00:21:16.291 "auth": { 00:21:16.291 "state": "completed", 00:21:16.291 "digest": "sha512", 00:21:16.291 "dhgroup": "ffdhe3072" 00:21:16.291 } 00:21:16.291 } 00:21:16.291 ]' 00:21:16.291 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.291 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.291 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.291 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:16.291 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.291 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.291 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.291 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.551 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:21:16.551 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:21:17.117 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.117 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:17.117 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.117 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.117 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.117 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.117 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.117 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.117 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.375 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:17.375 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.375 09:55:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.375 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:17.375 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:17.375 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.375 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.375 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.375 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.375 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.375 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.375 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.375 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.633 00:21:17.633 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.633 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.633 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.891 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.891 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.891 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.891 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.891 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.891 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.891 { 00:21:17.891 "cntlid": 121, 00:21:17.891 "qid": 0, 00:21:17.891 "state": "enabled", 00:21:17.891 "thread": "nvmf_tgt_poll_group_000", 00:21:17.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:17.891 "listen_address": { 00:21:17.891 "trtype": "TCP", 00:21:17.891 "adrfam": "IPv4", 00:21:17.891 "traddr": "10.0.0.2", 00:21:17.891 "trsvcid": "4420" 00:21:17.891 }, 00:21:17.891 "peer_address": { 00:21:17.891 "trtype": "TCP", 00:21:17.891 "adrfam": "IPv4", 00:21:17.891 "traddr": "10.0.0.1", 00:21:17.891 "trsvcid": "33118" 00:21:17.891 }, 00:21:17.891 "auth": { 00:21:17.891 "state": "completed", 00:21:17.891 "digest": "sha512", 00:21:17.891 "dhgroup": "ffdhe4096" 00:21:17.891 } 00:21:17.891 } 00:21:17.891 ]' 00:21:17.891 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.891 09:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.891 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.891 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:17.891 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.891 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.891 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.891 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.149 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:21:18.149 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:21:18.723 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:21:18.723 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:18.723 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.723 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.723 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.723 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.723 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.723 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.980 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:18.980 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.980 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.980 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:18.980 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:18.980 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.980 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.980 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.980 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.980 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.980 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.980 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.980 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.239 00:21:19.239 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.239 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.239 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.497 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.497 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.497 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.497 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.497 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.497 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.497 { 00:21:19.497 "cntlid": 123, 00:21:19.497 "qid": 0, 00:21:19.497 "state": "enabled", 00:21:19.497 "thread": "nvmf_tgt_poll_group_000", 00:21:19.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:19.497 "listen_address": { 00:21:19.497 "trtype": "TCP", 00:21:19.497 "adrfam": "IPv4", 00:21:19.497 "traddr": "10.0.0.2", 00:21:19.497 "trsvcid": "4420" 00:21:19.497 }, 00:21:19.497 "peer_address": { 00:21:19.497 "trtype": "TCP", 00:21:19.497 "adrfam": "IPv4", 00:21:19.497 "traddr": "10.0.0.1", 00:21:19.497 "trsvcid": "33136" 00:21:19.497 }, 00:21:19.497 "auth": { 00:21:19.497 "state": "completed", 00:21:19.497 "digest": "sha512", 00:21:19.497 "dhgroup": "ffdhe4096" 00:21:19.497 } 00:21:19.497 } 00:21:19.497 ]' 00:21:19.497 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.497 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.497 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.497 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:19.497 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.497 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:19.497 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.497 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.756 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:21:19.756 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:21:20.320 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.320 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:20.320 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.320 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.320 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.320 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:20.320 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.320 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.576 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:20.576 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.576 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.576 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:20.576 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:20.576 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.576 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.576 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.576 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.576 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.576 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.576 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.576 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.833 00:21:20.833 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.833 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.833 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.090 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.090 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.090 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.090 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.090 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.090 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.090 { 00:21:21.090 "cntlid": 125, 00:21:21.090 "qid": 0, 00:21:21.090 "state": "enabled", 00:21:21.090 "thread": "nvmf_tgt_poll_group_000", 00:21:21.090 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:21.090 "listen_address": { 00:21:21.090 "trtype": "TCP", 00:21:21.090 "adrfam": "IPv4", 00:21:21.090 "traddr": "10.0.0.2", 00:21:21.090 "trsvcid": "4420" 00:21:21.090 }, 00:21:21.090 "peer_address": { 00:21:21.090 "trtype": "TCP", 00:21:21.090 "adrfam": "IPv4", 00:21:21.090 "traddr": "10.0.0.1", 00:21:21.090 "trsvcid": "33160" 00:21:21.090 }, 00:21:21.090 "auth": { 00:21:21.090 "state": "completed", 00:21:21.090 "digest": "sha512", 00:21:21.090 "dhgroup": "ffdhe4096" 00:21:21.090 } 00:21:21.090 } 00:21:21.090 ]' 00:21:21.090 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.090 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.090 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.090 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:21.090 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.090 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.090 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.090 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.347 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:21:21.347 09:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:21:21.911 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.911 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.911 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.911 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.911 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.911 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.911 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:21.911 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:22.169 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:22.169 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:21:22.169 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.169 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:22.169 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:22.169 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.169 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:22.169 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.169 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.169 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.169 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:22.169 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:22.169 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:22.424 00:21:22.424 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:22.424 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.424 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.682 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.682 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.682 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.682 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.682 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.682 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.682 { 00:21:22.682 "cntlid": 127, 00:21:22.682 "qid": 0, 00:21:22.682 "state": "enabled", 00:21:22.682 "thread": "nvmf_tgt_poll_group_000", 00:21:22.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:22.682 "listen_address": { 00:21:22.682 "trtype": "TCP", 00:21:22.682 "adrfam": "IPv4", 00:21:22.682 "traddr": "10.0.0.2", 00:21:22.682 "trsvcid": "4420" 00:21:22.682 }, 00:21:22.682 "peer_address": { 00:21:22.682 "trtype": "TCP", 00:21:22.682 "adrfam": "IPv4", 00:21:22.682 "traddr": "10.0.0.1", 00:21:22.682 "trsvcid": "33186" 00:21:22.682 }, 00:21:22.682 "auth": { 00:21:22.682 "state": "completed", 00:21:22.682 "digest": "sha512", 00:21:22.682 "dhgroup": "ffdhe4096" 00:21:22.682 } 00:21:22.682 } 00:21:22.682 ]' 00:21:22.682 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.682 09:55:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.682 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.682 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:22.682 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.682 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.682 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.682 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.939 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:21:22.939 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:21:23.501 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.501 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:23.501 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.501 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.501 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.501 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.501 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.501 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:23.501 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:23.759 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:23.759 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.759 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.759 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:23.759 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:23.759 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.759 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.759 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.759 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.759 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.759 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.759 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.759 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.016 00:21:24.016 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.016 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.016 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.274 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.274 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.274 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.274 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.274 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.274 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.274 { 00:21:24.274 "cntlid": 129, 00:21:24.274 "qid": 0, 00:21:24.274 "state": "enabled", 00:21:24.274 "thread": "nvmf_tgt_poll_group_000", 00:21:24.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:24.274 "listen_address": { 00:21:24.274 "trtype": "TCP", 00:21:24.274 "adrfam": "IPv4", 00:21:24.274 "traddr": "10.0.0.2", 00:21:24.274 "trsvcid": "4420" 00:21:24.274 }, 00:21:24.274 "peer_address": { 00:21:24.274 "trtype": "TCP", 00:21:24.274 "adrfam": "IPv4", 00:21:24.274 "traddr": "10.0.0.1", 00:21:24.274 "trsvcid": "33214" 00:21:24.274 }, 00:21:24.274 "auth": { 00:21:24.274 "state": "completed", 00:21:24.274 "digest": "sha512", 00:21:24.274 "dhgroup": "ffdhe6144" 00:21:24.274 } 00:21:24.274 } 00:21:24.274 ]' 00:21:24.274 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.274 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.274 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.274 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.274 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.532 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:24.532 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.532 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.532 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:21:24.532 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:21:25.099 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.358 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:25.358 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.358 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.358 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.358 09:55:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.358 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:25.358 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:25.358 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:25.358 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.358 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.358 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:25.358 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:25.358 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.358 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.358 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.358 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.358 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.358 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:25.358 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.358 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.925 00:21:25.925 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.925 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.925 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.925 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.925 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.925 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.925 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.925 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.925 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.925 { 00:21:25.925 "cntlid": 131, 00:21:25.925 "qid": 0, 00:21:25.925 "state": 
"enabled", 00:21:25.925 "thread": "nvmf_tgt_poll_group_000", 00:21:25.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:25.925 "listen_address": { 00:21:25.925 "trtype": "TCP", 00:21:25.925 "adrfam": "IPv4", 00:21:25.925 "traddr": "10.0.0.2", 00:21:25.925 "trsvcid": "4420" 00:21:25.925 }, 00:21:25.925 "peer_address": { 00:21:25.925 "trtype": "TCP", 00:21:25.925 "adrfam": "IPv4", 00:21:25.925 "traddr": "10.0.0.1", 00:21:25.925 "trsvcid": "39254" 00:21:25.925 }, 00:21:25.925 "auth": { 00:21:25.925 "state": "completed", 00:21:25.925 "digest": "sha512", 00:21:25.925 "dhgroup": "ffdhe6144" 00:21:25.925 } 00:21:25.925 } 00:21:25.925 ]' 00:21:25.925 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.925 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.925 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.183 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:26.183 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.183 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.183 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.183 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.441 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret 
DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:21:26.441 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.005 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.572 00:21:27.572 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.572 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.572 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.572 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.572 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.572 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.572 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.572 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.572 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.572 { 00:21:27.572 "cntlid": 133, 00:21:27.572 "qid": 0, 00:21:27.572 "state": "enabled", 00:21:27.572 "thread": "nvmf_tgt_poll_group_000", 00:21:27.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:27.572 "listen_address": { 00:21:27.572 "trtype": "TCP", 00:21:27.572 "adrfam": "IPv4", 00:21:27.572 "traddr": "10.0.0.2", 00:21:27.572 "trsvcid": "4420" 00:21:27.572 }, 00:21:27.572 "peer_address": { 00:21:27.572 "trtype": "TCP", 00:21:27.572 "adrfam": "IPv4", 00:21:27.572 "traddr": "10.0.0.1", 00:21:27.572 "trsvcid": "39272" 00:21:27.572 }, 00:21:27.572 "auth": { 00:21:27.572 "state": "completed", 00:21:27.572 "digest": "sha512", 00:21:27.572 "dhgroup": "ffdhe6144" 00:21:27.572 } 
00:21:27.572 } 00:21:27.572 ]' 00:21:27.572 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.572 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.572 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.830 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.830 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.830 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.830 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.830 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.089 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:21:28.089 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:21:28.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:28.656 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:29.224 00:21:29.224 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.224 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.224 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.224 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.224 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:29.224 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.224 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.224 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.224 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.224 { 00:21:29.224 "cntlid": 135, 00:21:29.224 "qid": 0, 00:21:29.224 "state": "enabled", 00:21:29.224 "thread": "nvmf_tgt_poll_group_000", 00:21:29.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:29.224 "listen_address": { 00:21:29.224 "trtype": "TCP", 00:21:29.224 "adrfam": "IPv4", 00:21:29.224 "traddr": "10.0.0.2", 00:21:29.224 "trsvcid": "4420" 00:21:29.224 }, 00:21:29.224 "peer_address": { 00:21:29.224 "trtype": "TCP", 00:21:29.224 "adrfam": "IPv4", 00:21:29.224 "traddr": "10.0.0.1", 00:21:29.224 "trsvcid": "39302" 00:21:29.224 }, 00:21:29.224 "auth": { 00:21:29.224 "state": "completed", 00:21:29.224 "digest": "sha512", 00:21:29.224 "dhgroup": "ffdhe6144" 00:21:29.224 } 00:21:29.224 } 00:21:29.224 ]' 00:21:29.224 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.483 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.483 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.483 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.483 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.483 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.483 09:55:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.483 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.741 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:21:29.741 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:21:30.309 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.309 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:30.309 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.309 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.309 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.309 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.309 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.309 09:55:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.309 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.309 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:30.309 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.309 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.309 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:30.309 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:30.309 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.309 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.309 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.309 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.309 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.309 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.309 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.309 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.878 00:21:30.878 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.878 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.878 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.138 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.138 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.138 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.138 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.138 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.138 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.138 { 00:21:31.138 "cntlid": 137, 00:21:31.138 "qid": 0, 00:21:31.138 "state": "enabled", 00:21:31.138 "thread": "nvmf_tgt_poll_group_000", 00:21:31.138 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:31.138 "listen_address": { 00:21:31.138 "trtype": "TCP", 00:21:31.138 "adrfam": "IPv4", 00:21:31.138 "traddr": "10.0.0.2", 00:21:31.138 "trsvcid": "4420" 00:21:31.138 }, 00:21:31.138 "peer_address": { 00:21:31.138 "trtype": "TCP", 00:21:31.138 "adrfam": "IPv4", 00:21:31.138 "traddr": "10.0.0.1", 00:21:31.138 "trsvcid": "39328" 00:21:31.138 }, 00:21:31.138 "auth": { 00:21:31.138 "state": "completed", 00:21:31.138 "digest": "sha512", 00:21:31.138 "dhgroup": "ffdhe8192" 00:21:31.138 } 00:21:31.138 } 00:21:31.138 ]' 00:21:31.138 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.138 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.138 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.138 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.138 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.138 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.138 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.138 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.397 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret 
DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:21:31.397 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:21:31.965 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.965 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:31.965 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.965 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.965 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.965 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.965 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:31.965 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:32.223 09:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:32.223 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.223 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.223 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:32.223 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:32.223 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.223 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.223 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.223 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.223 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.223 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.223 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.223 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.789 00:21:32.789 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.789 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.789 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.789 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.789 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.789 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.789 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.789 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.789 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.789 { 00:21:32.789 "cntlid": 139, 00:21:32.789 "qid": 0, 00:21:32.789 "state": "enabled", 00:21:32.789 "thread": "nvmf_tgt_poll_group_000", 00:21:32.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:32.789 "listen_address": { 00:21:32.789 "trtype": "TCP", 00:21:32.789 "adrfam": "IPv4", 00:21:32.789 "traddr": "10.0.0.2", 00:21:32.789 "trsvcid": "4420" 00:21:32.789 }, 00:21:32.789 "peer_address": { 00:21:32.789 "trtype": "TCP", 00:21:32.789 "adrfam": "IPv4", 00:21:32.789 "traddr": "10.0.0.1", 00:21:32.789 "trsvcid": "39354" 00:21:32.789 }, 00:21:32.789 "auth": { 00:21:32.789 "state": 
"completed", 00:21:32.789 "digest": "sha512", 00:21:32.789 "dhgroup": "ffdhe8192" 00:21:32.789 } 00:21:32.789 } 00:21:32.789 ]' 00:21:33.046 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.047 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.047 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.047 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.047 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.047 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.047 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.047 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.304 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:21:33.304 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: --dhchap-ctrl-secret DHHC-1:02:Mjk5ZDY5NGI1NWI5NmVjYTk4ZTc0NWMxMTlmZTkzM2M0YTUwOWY1MDdkMjUwOWFmuPE94g==: 00:21:33.870 09:56:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.870 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:33.870 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.870 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.870 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.870 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.870 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:33.870 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:34.128 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:34.128 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.128 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.128 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:34.128 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:34.128 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.128 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.128 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.128 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.128 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.128 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.128 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.128 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.386 00:21:34.386 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.386 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.386 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.645 
09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.645 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.645 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.645 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.645 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.645 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.645 { 00:21:34.645 "cntlid": 141, 00:21:34.645 "qid": 0, 00:21:34.645 "state": "enabled", 00:21:34.645 "thread": "nvmf_tgt_poll_group_000", 00:21:34.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:34.645 "listen_address": { 00:21:34.645 "trtype": "TCP", 00:21:34.645 "adrfam": "IPv4", 00:21:34.645 "traddr": "10.0.0.2", 00:21:34.645 "trsvcid": "4420" 00:21:34.645 }, 00:21:34.645 "peer_address": { 00:21:34.645 "trtype": "TCP", 00:21:34.645 "adrfam": "IPv4", 00:21:34.645 "traddr": "10.0.0.1", 00:21:34.645 "trsvcid": "39382" 00:21:34.645 }, 00:21:34.645 "auth": { 00:21:34.645 "state": "completed", 00:21:34.645 "digest": "sha512", 00:21:34.645 "dhgroup": "ffdhe8192" 00:21:34.645 } 00:21:34.645 } 00:21:34.645 ]' 00:21:34.645 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.645 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.645 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.903 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:34.903 09:56:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.903 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.903 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.903 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.160 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:21:35.160 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:01:MjlkMjQyMzViZDRjMDRlNDA0OTFiYzBlMzljM2M1ZmPLjL7Y: 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.726 
09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.726 09:56:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:35.726 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.293 00:21:36.293 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.293 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.293 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.552 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.552 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.552 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.552 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.552 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.552 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.552 { 00:21:36.552 "cntlid": 143, 
00:21:36.552 "qid": 0, 00:21:36.552 "state": "enabled", 00:21:36.552 "thread": "nvmf_tgt_poll_group_000", 00:21:36.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:36.552 "listen_address": { 00:21:36.552 "trtype": "TCP", 00:21:36.552 "adrfam": "IPv4", 00:21:36.552 "traddr": "10.0.0.2", 00:21:36.552 "trsvcid": "4420" 00:21:36.552 }, 00:21:36.552 "peer_address": { 00:21:36.552 "trtype": "TCP", 00:21:36.552 "adrfam": "IPv4", 00:21:36.552 "traddr": "10.0.0.1", 00:21:36.552 "trsvcid": "49734" 00:21:36.552 }, 00:21:36.552 "auth": { 00:21:36.552 "state": "completed", 00:21:36.552 "digest": "sha512", 00:21:36.552 "dhgroup": "ffdhe8192" 00:21:36.552 } 00:21:36.552 } 00:21:36.552 ]' 00:21:36.552 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.552 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.552 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.552 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:36.552 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.552 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.552 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.552 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.812 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:21:36.812 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=: 00:21:37.377 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.377 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:37.377 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.377 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.377 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.377 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:37.377 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:37.377 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:37.377 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:37.377 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:21:37.377 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:37.635 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:37.635 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.635 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.635 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:37.635 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:37.635 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.635 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.635 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.635 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.635 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.635 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.635 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.635 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.202 00:21:38.202 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.202 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.202 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.202 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.202 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.202 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.202 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.202 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.202 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.202 { 00:21:38.202 "cntlid": 145, 00:21:38.202 "qid": 0, 00:21:38.202 "state": "enabled", 00:21:38.202 "thread": "nvmf_tgt_poll_group_000", 00:21:38.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:38.202 "listen_address": { 
00:21:38.202 "trtype": "TCP", 00:21:38.202 "adrfam": "IPv4", 00:21:38.202 "traddr": "10.0.0.2", 00:21:38.202 "trsvcid": "4420" 00:21:38.202 }, 00:21:38.202 "peer_address": { 00:21:38.202 "trtype": "TCP", 00:21:38.202 "adrfam": "IPv4", 00:21:38.202 "traddr": "10.0.0.1", 00:21:38.202 "trsvcid": "49768" 00:21:38.202 }, 00:21:38.202 "auth": { 00:21:38.202 "state": "completed", 00:21:38.202 "digest": "sha512", 00:21:38.202 "dhgroup": "ffdhe8192" 00:21:38.202 } 00:21:38.202 } 00:21:38.202 ]' 00:21:38.202 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.460 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.460 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.460 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:38.460 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.460 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.460 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.460 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.718 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:21:38.718 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmExMTA3ZTJlZGMzODgyMDZiYjY0N2I5ZWM5NWNjNGM2MWQzYWZhMmY3YzFlZTI540ms5Q==: --dhchap-ctrl-secret DHHC-1:03:MTk0MGE3ZmE0NjgzMThjZTUwNzA4NWE3ZTJkMDFhMDE1MmU2Y2U5MjFlODVkZWMyMjFjOGExMzM1YzNmZGRhONklDaY=: 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@650 -- # local es=0 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:39.285 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:39.545 request: 00:21:39.545 { 00:21:39.545 "name": "nvme0", 00:21:39.545 "trtype": "tcp", 00:21:39.545 "traddr": "10.0.0.2", 00:21:39.545 "adrfam": "ipv4", 00:21:39.545 "trsvcid": "4420", 00:21:39.545 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:39.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:39.545 "prchk_reftag": false, 00:21:39.545 "prchk_guard": false, 00:21:39.545 "hdgst": false, 00:21:39.545 "ddgst": 
false, 00:21:39.545 "dhchap_key": "key2", 00:21:39.545 "allow_unrecognized_csi": false, 00:21:39.545 "method": "bdev_nvme_attach_controller", 00:21:39.545 "req_id": 1 00:21:39.545 } 00:21:39.545 Got JSON-RPC error response 00:21:39.545 response: 00:21:39.545 { 00:21:39.545 "code": -5, 00:21:39.545 "message": "Input/output error" 00:21:39.545 } 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:39.545 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.112 request: 00:21:40.112 { 00:21:40.112 "name": "nvme0", 00:21:40.112 "trtype": "tcp", 00:21:40.112 "traddr": "10.0.0.2", 
00:21:40.112 "adrfam": "ipv4", 00:21:40.112 "trsvcid": "4420", 00:21:40.112 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:40.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:40.112 "prchk_reftag": false, 00:21:40.112 "prchk_guard": false, 00:21:40.112 "hdgst": false, 00:21:40.112 "ddgst": false, 00:21:40.112 "dhchap_key": "key1", 00:21:40.112 "dhchap_ctrlr_key": "ckey2", 00:21:40.112 "allow_unrecognized_csi": false, 00:21:40.112 "method": "bdev_nvme_attach_controller", 00:21:40.112 "req_id": 1 00:21:40.112 } 00:21:40.112 Got JSON-RPC error response 00:21:40.112 response: 00:21:40.112 { 00:21:40.112 "code": -5, 00:21:40.112 "message": "Input/output error" 00:21:40.112 } 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.112 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.680 request: 00:21:40.680 { 00:21:40.680 "name": "nvme0", 00:21:40.680 "trtype": "tcp", 00:21:40.680 "traddr": "10.0.0.2", 00:21:40.680 "adrfam": "ipv4", 00:21:40.680 "trsvcid": "4420", 00:21:40.680 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:40.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:40.680 "prchk_reftag": false, 00:21:40.680 "prchk_guard": false, 00:21:40.680 "hdgst": false, 00:21:40.680 "ddgst": false, 00:21:40.680 "dhchap_key": "key1", 00:21:40.680 "dhchap_ctrlr_key": "ckey1", 00:21:40.680 "allow_unrecognized_csi": false, 00:21:40.680 "method": "bdev_nvme_attach_controller", 00:21:40.680 "req_id": 1 00:21:40.680 } 00:21:40.680 Got JSON-RPC error response 00:21:40.680 response: 00:21:40.680 { 00:21:40.680 "code": -5, 00:21:40.680 "message": "Input/output error" 00:21:40.680 } 00:21:40.680 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:40.680 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:40.680 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:40.680 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:40.680 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.680 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.680 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.680 
09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.680 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1248443 00:21:40.680 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1248443 ']' 00:21:40.680 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1248443 00:21:40.680 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:40.680 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:40.680 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1248443 00:21:40.680 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:40.680 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:40.680 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1248443' 00:21:40.680 killing process with pid 1248443 00:21:40.680 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1248443 00:21:40.680 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1248443 00:21:40.938 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=1270242 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 1270242 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1270242 ']' 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1270242 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1270242 ']' 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:40.939 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.197 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.197 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:41.197 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:41.197 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.197 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.455 null0 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.B43 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.0P1 ]] 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0P1 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.455 09:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ImE 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.PjO ]] 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PjO 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xlN 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.455 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.455 09:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.sbU ]]
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sbU
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.EPQ
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]]
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:41.455 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:42.388 nvme0n1
00:21:42.388 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:42.388 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:42.388 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:42.388 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:42.388 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:42.388 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:42.388 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:42.388 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:42.388 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:42.388 {
00:21:42.388 "cntlid": 1,
00:21:42.388 "qid": 0,
00:21:42.388 "state": "enabled",
00:21:42.388 "thread": "nvmf_tgt_poll_group_000",
00:21:42.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:21:42.388 "listen_address": {
00:21:42.388 "trtype": "TCP",
00:21:42.388 "adrfam": "IPv4",
00:21:42.388 "traddr": "10.0.0.2",
00:21:42.388 "trsvcid": "4420"
00:21:42.388 },
00:21:42.388 "peer_address": {
00:21:42.388 "trtype": "TCP",
00:21:42.388 "adrfam": "IPv4",
00:21:42.388 "traddr": "10.0.0.1",
00:21:42.388 "trsvcid": "49814"
00:21:42.388 },
00:21:42.388 "auth": {
00:21:42.388 "state": "completed",
00:21:42.388 "digest": "sha512",
00:21:42.388 "dhgroup": "ffdhe8192"
00:21:42.388 }
00:21:42.388 }
00:21:42.388 ]'
00:21:42.388 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:42.388 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:42.388 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:42.388 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:42.388 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:42.647 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:42.647 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:42.647 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:42.647 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=:
00:21:42.647 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=:
00:21:43.212 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:43.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:43.212 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:21:43.212 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:43.212 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:43.212 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:43.212 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:21:43.212 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:43.212 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:43.212 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:43.212 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:21:43.212 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:21:43.468 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:21:43.468 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:21:43.468 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:21:43.468 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:21:43.468 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:43.468 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:21:43.468 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:43.468 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:43.468 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:43.468 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:43.725 request:
00:21:43.725 {
00:21:43.725 "name": "nvme0",
00:21:43.725 "trtype": "tcp",
00:21:43.725 "traddr": "10.0.0.2",
00:21:43.725 "adrfam": "ipv4",
00:21:43.725 "trsvcid": "4420",
00:21:43.725 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:21:43.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:21:43.725 "prchk_reftag": false,
00:21:43.725 "prchk_guard": false,
00:21:43.725 "hdgst": false,
00:21:43.725 "ddgst": false,
00:21:43.725 "dhchap_key": "key3",
00:21:43.725 "allow_unrecognized_csi": false,
00:21:43.725 "method": "bdev_nvme_attach_controller",
00:21:43.725 "req_id": 1
00:21:43.725 }
00:21:43.725 Got JSON-RPC error response
00:21:43.725 response:
00:21:43.725 {
00:21:43.725 "code": -5,
00:21:43.725 "message": "Input/output error"
00:21:43.725 }
00:21:43.725 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:21:43.725 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:21:43.725 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:21:43.725 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:21:43.725 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:21:43.725 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:21:43.725 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:21:43.725 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:21:43.982 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:21:43.982 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:21:43.982 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:21:43.982 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:21:43.982 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:43.982 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:21:43.982 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:43.982 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:43.982 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:43.982 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:43.983 request:
00:21:43.983 {
00:21:43.983 "name": "nvme0",
00:21:43.983 "trtype": "tcp",
00:21:43.983 "traddr": "10.0.0.2",
00:21:43.983 "adrfam": "ipv4",
00:21:43.983 "trsvcid": "4420",
00:21:43.983 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:21:43.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:21:43.983 "prchk_reftag": false,
00:21:43.983 "prchk_guard": false,
00:21:43.983 "hdgst": false,
00:21:43.983 "ddgst": false,
00:21:43.983 "dhchap_key": "key3",
00:21:43.983 "allow_unrecognized_csi": false,
00:21:43.983 "method": "bdev_nvme_attach_controller",
00:21:43.983 "req_id": 1
00:21:43.983 }
00:21:43.983 Got JSON-RPC error response
00:21:43.983 response:
00:21:43.983 {
00:21:43.983 "code": -5,
00:21:43.983 "message": "Input/output error"
00:21:43.983 }
00:21:43.983 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:21:43.983 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:21:43.983 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:21:43.983 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:21:43.983 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:21:43.983 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:21:43.983 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:21:43.983 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:21:43.983 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:44.240 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:44.804 request:
00:21:44.804 {
00:21:44.804 "name": "nvme0",
00:21:44.804 "trtype": "tcp",
00:21:44.804 "traddr": "10.0.0.2",
00:21:44.804 "adrfam": "ipv4",
00:21:44.804 "trsvcid": "4420",
00:21:44.804 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:21:44.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:21:44.804 "prchk_reftag": false,
00:21:44.804 "prchk_guard": false,
00:21:44.804 "hdgst": false,
00:21:44.804 "ddgst": false,
00:21:44.804 "dhchap_key": "key0",
00:21:44.804 "dhchap_ctrlr_key": "key1",
00:21:44.804 "allow_unrecognized_csi": false,
00:21:44.804 "method": "bdev_nvme_attach_controller",
00:21:44.804 "req_id": 1
00:21:44.804 }
00:21:44.804 Got JSON-RPC error response
00:21:44.804 response:
00:21:44.804 {
00:21:44.804 "code": -5,
00:21:44.804 "message": "Input/output error"
00:21:44.804 }
00:21:44.804 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:21:44.804 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:21:44.804 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:21:44.804 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:21:44.804 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:21:44.804 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:21:44.804 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:21:44.804 nvme0n1
00:21:44.804 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:21:44.804 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:21:44.804 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:45.061 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:45.061 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:45.061 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:45.320 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1
00:21:45.320 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:45.320 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:45.320 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:45.320 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:21:45.320 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:21:45.320 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:21:46.255 nvme0n1
00:21:46.255 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:21:46.255 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:21:46.255 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:46.255 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:46.255 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:21:46.255 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:46.255 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:46.255 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:46.255 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:21:46.255 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:21:46.255 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:46.526 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:46.526 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=:
00:21:46.526 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: --dhchap-ctrl-secret DHHC-1:03:ODc3OTRhYjNlZjY4NTBkMjVmMDk4YmVmMmMyYmE5MzMyZTE3N2RiNGFiODFkMzUwNzQ2ZTBjNDEzZjgzODNiMYuHKkU=:
00:21:47.092 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:21:47.092 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:21:47.092 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:21:47.092 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:21:47.092 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:21:47.092 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:21:47.092 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:21:47.092 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:47.092 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:47.349 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:21:47.349 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:21:47.349 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:21:47.349 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:21:47.349 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:47.349 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:21:47.349 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:47.349 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1
00:21:47.349 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:21:47.349 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:21:47.607 request:
00:21:47.607 {
00:21:47.607 "name": "nvme0",
00:21:47.607 "trtype": "tcp",
00:21:47.607 "traddr": "10.0.0.2",
00:21:47.607 "adrfam": "ipv4",
00:21:47.607 "trsvcid": "4420",
00:21:47.607 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:21:47.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:21:47.607 "prchk_reftag": false,
00:21:47.607 "prchk_guard": false,
00:21:47.607 "hdgst": false,
00:21:47.607 "ddgst": false,
00:21:47.607 "dhchap_key": "key1",
00:21:47.607 "allow_unrecognized_csi": false,
00:21:47.607 "method": "bdev_nvme_attach_controller",
00:21:47.607 "req_id": 1
00:21:47.607 }
00:21:47.607 Got JSON-RPC error response
00:21:47.607 response:
00:21:47.607 {
00:21:47.607 "code": -5,
00:21:47.607 "message": "Input/output error"
00:21:47.607 }
00:21:47.607 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:21:47.607 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:21:47.607 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:21:47.607 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:21:47.607 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:21:47.607 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:21:47.607 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:21:48.542 nvme0n1
00:21:48.542 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:21:48.542 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:48.542 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:21:48.542 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:48.542 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:48.542 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:48.801 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:21:48.801 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:48.801 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:48.801 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:48.801 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:21:48.801 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:21:48.801 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:21:49.059 nvme0n1
00:21:49.059 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:21:49.059 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:21:49.059 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:49.318 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:49.318 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:49.318 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:49.577 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3
00:21:49.577 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:49.577 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:49.577 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:49.577 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: '' 2s
00:21:49.577 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:21:49.577 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:21:49.577 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW:
00:21:49.577 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:21:49.577 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:21:49.577 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:21:49.577 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW: ]]
00:21:49.577 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NDM0YzUwOTNkZmU0N2FhYTAxOTVhMTk0ZjgwMzcwNDemY8CW:
00:21:49.577 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:21:49.577 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:21:49.577 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: 2s
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==:
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:21:51.489 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:21:51.490 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==: ]]
00:21:51.490 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:N2JlYzUyZWQ0ZDJiMWNlNDZhM2QzZGRkYmNjNDc3ZDI0ODQ1MGJkOWM5MjYzNmJh+lVeUQ==:
00:21:51.490 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:21:51.490 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:21:54.022 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:21:54.022 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0
00:21:54.022 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME
00:21:54.022 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1
00:21:54.022 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME
00:21:54.022 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1
00:21:54.022 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0
00:21:54.022 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:54.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:54.022 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:54.022 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:54.022 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:54.022 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:54.022 09:56:22
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:54.022 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:54.022 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:54.589 nvme0n1 00:21:54.589 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:54.589 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.589 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.589 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.589 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:54.589 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:54.848 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:54.848 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:54.848 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.107 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.107 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:55.107 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.107 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.107 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.107 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:55.107 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:55.365 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:55.365 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:55.365 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.365 09:56:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.365 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:55.365 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.365 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.365 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.365 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:55.366 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:55.366 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:55.366 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:55.623 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.623 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:55.623 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.623 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:55.623 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:55.881 request: 00:21:55.881 { 00:21:55.881 "name": "nvme0", 00:21:55.881 "dhchap_key": "key1", 00:21:55.881 "dhchap_ctrlr_key": "key3", 00:21:55.881 "method": "bdev_nvme_set_keys", 00:21:55.881 "req_id": 1 00:21:55.881 } 00:21:55.881 Got JSON-RPC error response 00:21:55.881 response: 00:21:55.881 { 00:21:55.881 "code": -13, 00:21:55.881 "message": "Permission denied" 00:21:55.881 } 00:21:55.881 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:55.881 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:55.881 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:55.881 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:55.881 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:55.881 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:55.881 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.140 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:21:56.140 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:57.073 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:57.073 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:57.073 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.332 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:57.332 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:57.332 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.332 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.332 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.332 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:57.332 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:57.332 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:58.268 nvme0n1 00:21:58.268 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:58.268 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.268 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.268 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.268 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:58.268 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:58.268 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:58.268 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:58.268 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:58.268 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:58.268 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:58.268 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:58.268 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:58.526 request: 00:21:58.526 { 00:21:58.526 "name": "nvme0", 00:21:58.526 "dhchap_key": "key2", 
00:21:58.526 "dhchap_ctrlr_key": "key0", 00:21:58.526 "method": "bdev_nvme_set_keys", 00:21:58.526 "req_id": 1 00:21:58.526 } 00:21:58.526 Got JSON-RPC error response 00:21:58.526 response: 00:21:58.526 { 00:21:58.526 "code": -13, 00:21:58.526 "message": "Permission denied" 00:21:58.526 } 00:21:58.527 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:58.527 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:58.527 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:58.527 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:58.527 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:58.527 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.527 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:58.801 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:58.801 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:59.735 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:59.735 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:59.735 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.994 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:59.994 09:56:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:59.994 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:59.994 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1248472 00:21:59.994 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1248472 ']' 00:21:59.994 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1248472 00:21:59.994 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:59.994 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:59.994 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1248472 00:21:59.994 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:59.994 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:59.994 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1248472' 00:21:59.994 killing process with pid 1248472 00:21:59.994 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1248472 00:21:59.994 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1248472 00:22:00.253 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:00.253 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:00.253 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:00.253 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:00.253 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:00.253 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:00.253 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:00.253 rmmod nvme_tcp 00:22:00.253 rmmod nvme_fabrics 00:22:00.512 rmmod nvme_keyring 00:22:00.512 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:00.512 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:00.512 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:00.512 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 1270242 ']' 00:22:00.512 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 1270242 00:22:00.512 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1270242 ']' 00:22:00.512 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1270242 00:22:00.512 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:00.512 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:00.512 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1270242 00:22:00.512 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:00.512 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:00.512 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 1270242' 00:22:00.512 killing process with pid 1270242 00:22:00.512 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1270242 00:22:00.512 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1270242 00:22:00.770 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:00.770 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:00.770 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:00.770 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:00.770 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:22:00.770 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:22:00.770 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:00.770 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:00.770 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:00.770 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.770 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.770 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.674 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:02.674 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.B43 /tmp/spdk.key-sha256.ImE 
/tmp/spdk.key-sha384.xlN /tmp/spdk.key-sha512.EPQ /tmp/spdk.key-sha512.0P1 /tmp/spdk.key-sha384.PjO /tmp/spdk.key-sha256.sbU '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:02.674 00:22:02.674 real 2m31.400s 00:22:02.674 user 5m48.448s 00:22:02.674 sys 0m23.860s 00:22:02.674 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:02.674 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.674 ************************************ 00:22:02.674 END TEST nvmf_auth_target 00:22:02.674 ************************************ 00:22:02.674 09:56:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:02.674 09:56:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:02.674 09:56:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:02.674 09:56:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:02.674 09:56:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:02.934 ************************************ 00:22:02.934 START TEST nvmf_bdevio_no_huge 00:22:02.934 ************************************ 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:02.934 * Looking for test storage... 
00:22:02.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:02.934 09:56:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.934 09:56:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:02.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.934 --rc genhtml_branch_coverage=1 00:22:02.934 --rc genhtml_function_coverage=1 00:22:02.934 --rc genhtml_legend=1 00:22:02.934 --rc geninfo_all_blocks=1 00:22:02.934 --rc geninfo_unexecuted_blocks=1 00:22:02.934 00:22:02.934 ' 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:02.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.934 --rc genhtml_branch_coverage=1 00:22:02.934 --rc genhtml_function_coverage=1 00:22:02.934 --rc genhtml_legend=1 00:22:02.934 --rc geninfo_all_blocks=1 00:22:02.934 --rc geninfo_unexecuted_blocks=1 00:22:02.934 00:22:02.934 ' 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:02.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.934 --rc genhtml_branch_coverage=1 00:22:02.934 --rc genhtml_function_coverage=1 00:22:02.934 --rc genhtml_legend=1 00:22:02.934 --rc geninfo_all_blocks=1 00:22:02.934 --rc geninfo_unexecuted_blocks=1 00:22:02.934 00:22:02.934 ' 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:02.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.934 --rc genhtml_branch_coverage=1 00:22:02.934 --rc genhtml_function_coverage=1 00:22:02.934 --rc genhtml_legend=1 00:22:02.934 --rc geninfo_all_blocks=1 00:22:02.934 --rc geninfo_unexecuted_blocks=1 00:22:02.934 00:22:02.934 ' 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:02.934 
09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:02.934 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:02.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
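The trace above records a bash diagnostic from nvmf/common.sh line 33 ("[: : integer expression expected"): the script compares an empty variable against an integer with `-eq`, which is not a valid integer test. A minimal sketch of that failure mode and the usual `${var:-0}` default guard follows; the variable name `SOME_FLAG` is illustrative, not SPDK's.

```shell
# Reproduce the "[: : integer expression expected" class of error:
# an empty string is not a valid operand for the numeric -eq test.
unset SOME_FLAG                                  # SOME_FLAG is a made-up name
[ "$SOME_FLAG" -eq 1 ] 2>/dev/null && mode=interrupt || true   # test errors out, mode stays unset

# Common guard: substitute a numeric default before comparing.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    mode=interrupt
else
    mode=polling
fi
echo "$mode"
```

As in the harness here, the stray diagnostic is harmless when the surrounding `if`/`[` treats the failed test as false, but the default-substitution form avoids the noise entirely.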
00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:02.935 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:08.203 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.203 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:08.203 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:08.203 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:08.203 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:08.203 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:08.204 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:08.204 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:08.204 Found net devices under 0000:86:00.0: cvl_0_0 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:22:08.204 Found net devices under 0000:86:00.1: cvl_0_1 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # is_hw=yes 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:08.204 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:08.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:08.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:22:08.204 00:22:08.204 --- 10.0.0.2 ping statistics --- 00:22:08.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.204 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:08.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:22:08.205 00:22:08.205 --- 10.0.0.1 ping statistics --- 00:22:08.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.205 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # return 0 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:08.205 09:56:36 
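The `ip`/`ping` commands traced above build the test topology: one port of the NIC is moved into the `cvl_0_0_ns_spdk` namespace and given 10.0.0.2/24, the peer port stays in the root namespace with 10.0.0.1/24, and a one-packet ping in each direction confirms connectivity before the target starts. The same topology can be sketched with a veth pair when no physical `cvl_0_*` ports are available; this is a privileged setup fragment under assumed, illustrative interface and namespace names, not the harness itself.

```shell
# Requires root. Names (spdk_tgt_ns, veth_init, veth_tgt) are illustrative.
ip netns add spdk_tgt_ns                          # namespace standing in for cvl_0_0_ns_spdk
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns spdk_tgt_ns            # target-side end moves into the namespace
ip addr add 10.0.0.1/24 dev veth_init             # initiator IP in the root namespace
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_init up
ip netns exec spdk_tgt_ns ip link set veth_tgt up
ip netns exec spdk_tgt_ns ip link set lo up
ping -c 1 10.0.0.2                                # initiator -> target, as in the trace
ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1      # target -> initiator
```

Running the target inside the namespace (the trace's `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt`) isolates its listener on 10.0.0.2:4420 from the host's network stack.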
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=1276960 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 1276960 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1276960 ']' 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:08.205 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:08.205 [2024-12-07 09:56:36.738757] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:22:08.205 [2024-12-07 09:56:36.738803] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:08.205 [2024-12-07 09:56:36.800249] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:08.205 [2024-12-07 09:56:36.867363] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.205 [2024-12-07 09:56:36.867400] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.205 [2024-12-07 09:56:36.867406] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.205 [2024-12-07 09:56:36.867412] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.205 [2024-12-07 09:56:36.867417] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:08.205 [2024-12-07 09:56:36.867530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:22:08.205 [2024-12-07 09:56:36.867641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:22:08.205 [2024-12-07 09:56:36.867750] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:08.205 [2024-12-07 09:56:36.867751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:22:09.188 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:09.188 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:22:09.188 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:09.188 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:09.188 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:09.188 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.188 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:09.188 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.188 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:09.188 [2024-12-07 09:56:37.620390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:09.189 09:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:09.189 Malloc0 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:09.189 [2024-12-07 09:56:37.664671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.189 09:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:09.189 { 00:22:09.189 "params": { 00:22:09.189 "name": "Nvme$subsystem", 00:22:09.189 "trtype": "$TEST_TRANSPORT", 00:22:09.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.189 "adrfam": "ipv4", 00:22:09.189 "trsvcid": "$NVMF_PORT", 00:22:09.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.189 "hdgst": ${hdgst:-false}, 00:22:09.189 "ddgst": ${ddgst:-false} 00:22:09.189 }, 00:22:09.189 "method": "bdev_nvme_attach_controller" 00:22:09.189 } 00:22:09.189 EOF 00:22:09.189 )") 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
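The `rpc_cmd` calls traced above (bdevio.sh lines 18-22) configure the target over the `/var/tmp/spdk.sock` RPC socket. Outside the harness, the same sequence can be issued with `scripts/rpc.py` against an already-running `nvmf_tgt`; this is a sketch assuming the default RPC socket, with the flags copied from the trace.

```shell
# Standalone equivalent of the traced rpc_cmd sequence (nvmf_tgt must be running).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, 8192-byte in-capsule data
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB malloc bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # expose the bdev as a namespace
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

After the listener is added, the bdevio binary attaches as an initiator using the JSON printed below, which feeds `bdev_nvme_attach_controller` the same traddr/trsvcid/subnqn values.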
00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:22:09.189 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:22:09.189 "params": { 00:22:09.189 "name": "Nvme1", 00:22:09.189 "trtype": "tcp", 00:22:09.189 "traddr": "10.0.0.2", 00:22:09.189 "adrfam": "ipv4", 00:22:09.189 "trsvcid": "4420", 00:22:09.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.189 "hdgst": false, 00:22:09.189 "ddgst": false 00:22:09.189 }, 00:22:09.189 "method": "bdev_nvme_attach_controller" 00:22:09.189 }' 00:22:09.189 [2024-12-07 09:56:37.714880] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:22:09.189 [2024-12-07 09:56:37.714930] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1277114 ] 00:22:09.189 [2024-12-07 09:56:37.772619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:09.189 [2024-12-07 09:56:37.839612] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.189 [2024-12-07 09:56:37.839708] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.189 [2024-12-07 09:56:37.839709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.471 I/O targets: 00:22:09.471 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:09.471 00:22:09.471 00:22:09.471 CUnit - A unit testing framework for C - Version 2.1-3 00:22:09.471 http://cunit.sourceforge.net/ 00:22:09.471 00:22:09.471 00:22:09.471 Suite: bdevio tests on: Nvme1n1 00:22:09.471 Test: blockdev write read block ...passed 00:22:09.471 Test: blockdev write zeroes read block ...passed 00:22:09.471 Test: blockdev write zeroes read no split ...passed 00:22:09.471 Test: blockdev write zeroes 
read split ...passed 00:22:09.471 Test: blockdev write zeroes read split partial ...passed 00:22:09.471 Test: blockdev reset ...[2024-12-07 09:56:38.163427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:09.471 [2024-12-07 09:56:38.163492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56cb90 (9): Bad file descriptor 00:22:09.740 [2024-12-07 09:56:38.232706] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:09.740 passed 00:22:09.740 Test: blockdev write read 8 blocks ...passed 00:22:09.740 Test: blockdev write read size > 128k ...passed 00:22:09.740 Test: blockdev write read invalid size ...passed 00:22:09.741 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:09.741 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:09.741 Test: blockdev write read max offset ...passed 00:22:09.741 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:09.741 Test: blockdev writev readv 8 blocks ...passed 00:22:09.741 Test: blockdev writev readv 30 x 1block ...passed 00:22:09.999 Test: blockdev writev readv block ...passed 00:22:09.999 Test: blockdev writev readv size > 128k ...passed 00:22:09.999 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:09.999 Test: blockdev comparev and writev ...[2024-12-07 09:56:38.525170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:09.999 [2024-12-07 09:56:38.525202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.999 [2024-12-07 09:56:38.525218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:09.999 [2024-12-07 09:56:38.525226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.999 [2024-12-07 09:56:38.525490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:09.999 [2024-12-07 09:56:38.525500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:09.999 [2024-12-07 09:56:38.525512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:09.999 [2024-12-07 09:56:38.525519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:09.999 [2024-12-07 09:56:38.525771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:09.999 [2024-12-07 09:56:38.525781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:09.999 [2024-12-07 09:56:38.525792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:09.999 [2024-12-07 09:56:38.525799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:09.999 [2024-12-07 09:56:38.526053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:09.999 [2024-12-07 09:56:38.526064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:09.999 [2024-12-07 09:56:38.526076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:22:10.000 [2024-12-07 09:56:38.526086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:10.000 passed 00:22:10.000 Test: blockdev nvme passthru rw ...passed 00:22:10.000 Test: blockdev nvme passthru vendor specific ...[2024-12-07 09:56:38.610244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:10.000 [2024-12-07 09:56:38.610259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:10.000 [2024-12-07 09:56:38.610379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:10.000 [2024-12-07 09:56:38.610389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:10.000 [2024-12-07 09:56:38.610507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:10.000 [2024-12-07 09:56:38.610516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:10.000 [2024-12-07 09:56:38.610632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:10.000 [2024-12-07 09:56:38.610642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:10.000 passed 00:22:10.000 Test: blockdev nvme admin passthru ...passed 00:22:10.000 Test: blockdev copy ...passed 00:22:10.000 00:22:10.000 Run Summary: Type Total Ran Passed Failed Inactive 00:22:10.000 suites 1 1 n/a 0 0 00:22:10.000 tests 23 23 23 0 0 00:22:10.000 asserts 152 152 152 0 n/a 00:22:10.000 00:22:10.000 Elapsed time = 1.247 seconds 00:22:10.259 09:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:10.259 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.259 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:10.259 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.259 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:10.259 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:10.259 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:10.259 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:10.259 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:10.259 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:10.259 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:10.259 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:10.259 rmmod nvme_tcp 00:22:10.259 rmmod nvme_fabrics 00:22:10.518 rmmod nvme_keyring 00:22:10.518 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:10.518 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:10.518 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:10.518 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 1276960 ']' 00:22:10.518 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@514 -- # killprocess 1276960 00:22:10.518 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1276960 ']' 00:22:10.518 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1276960 00:22:10.518 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:22:10.518 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:10.518 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1276960 00:22:10.519 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:22:10.519 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:22:10.519 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1276960' 00:22:10.519 killing process with pid 1276960 00:22:10.519 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1276960 00:22:10.519 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1276960 00:22:10.778 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:10.778 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:10.778 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:10.778 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:10.778 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:22:10.778 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@787 -- # iptables-restore 00:22:10.778 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:10.778 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:10.778 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:10.778 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.778 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.778 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:13.310 00:22:13.310 real 0m10.064s 00:22:13.310 user 0m13.394s 00:22:13.310 sys 0m4.866s 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:13.310 ************************************ 00:22:13.310 END TEST nvmf_bdevio_no_huge 00:22:13.310 ************************************ 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:13.310 ************************************ 00:22:13.310 START TEST nvmf_tls 
00:22:13.310 ************************************ 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:13.310 * Looking for test storage... 00:22:13.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:13.310 09:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:13.310 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 
'LCOV_OPTS= 00:22:13.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.311 --rc genhtml_branch_coverage=1 00:22:13.311 --rc genhtml_function_coverage=1 00:22:13.311 --rc genhtml_legend=1 00:22:13.311 --rc geninfo_all_blocks=1 00:22:13.311 --rc geninfo_unexecuted_blocks=1 00:22:13.311 00:22:13.311 ' 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:13.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.311 --rc genhtml_branch_coverage=1 00:22:13.311 --rc genhtml_function_coverage=1 00:22:13.311 --rc genhtml_legend=1 00:22:13.311 --rc geninfo_all_blocks=1 00:22:13.311 --rc geninfo_unexecuted_blocks=1 00:22:13.311 00:22:13.311 ' 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:13.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.311 --rc genhtml_branch_coverage=1 00:22:13.311 --rc genhtml_function_coverage=1 00:22:13.311 --rc genhtml_legend=1 00:22:13.311 --rc geninfo_all_blocks=1 00:22:13.311 --rc geninfo_unexecuted_blocks=1 00:22:13.311 00:22:13.311 ' 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:13.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.311 --rc genhtml_branch_coverage=1 00:22:13.311 --rc genhtml_function_coverage=1 00:22:13.311 --rc genhtml_legend=1 00:22:13.311 --rc geninfo_all_blocks=1 00:22:13.311 --rc geninfo_unexecuted_blocks=1 00:22:13.311 00:22:13.311 ' 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:13.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:13.311 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.581 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:18.581 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:18.581 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:18.581 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:18.582 Found 0000:86:00.0 (0x8086 - 0x159b) 
00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:18.582 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:18.582 Found net devices under 0000:86:00.0: cvl_0_0 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:18.582 Found net devices under 0000:86:00.1: cvl_0_1 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # (( 2 
== 0 )) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # is_hw=yes 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip 
netns add cvl_0_0_ns_spdk 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:18.582 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:18.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:18.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms 00:22:18.841 00:22:18.841 --- 10.0.0.2 ping statistics --- 00:22:18.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.841 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:18.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:18.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:22:18.841 00:22:18.841 --- 10.0.0.1 ping statistics --- 00:22:18.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.841 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # return 0 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=1280765 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 1280765 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1280765 ']' 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:18.841 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.841 [2024-12-07 09:56:47.433923] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:22:18.841 [2024-12-07 09:56:47.433983] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.841 [2024-12-07 09:56:47.497288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.841 [2024-12-07 09:56:47.538617] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.841 [2024-12-07 09:56:47.538657] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:18.841 [2024-12-07 09:56:47.538665] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.841 [2024-12-07 09:56:47.538671] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.841 [2024-12-07 09:56:47.538676] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:18.841 [2024-12-07 09:56:47.538694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.099 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:19.099 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:19.099 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:19.099 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:19.099 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.099 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.099 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:19.099 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:19.099 true 00:22:19.099 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:19.099 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:19.357 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:19.357 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:19.357 
09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:19.616 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:19.616 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:19.874 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:19.874 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:19.874 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:19.874 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:19.874 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:20.132 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:20.132 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:20.132 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:20.132 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:20.390 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:20.390 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:20.390 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:22:20.647 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:20.647 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:20.647 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:20.647 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:20.647 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:20.906 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:20.906 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:22:21.164 09:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.yBQuCWndqe 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.cUzA0etz18 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:21.164 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:21.165 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.yBQuCWndqe 00:22:21.165 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.cUzA0etz18 00:22:21.165 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:21.423 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:21.681 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.yBQuCWndqe 00:22:21.681 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yBQuCWndqe 00:22:21.681 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:21.939 [2024-12-07 09:56:50.452676] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.939 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:21.939 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:22.198 [2024-12-07 09:56:50.837668] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:22.198 [2024-12-07 09:56:50.837912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.198 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:22.456 malloc0 00:22:22.456 09:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:22.713 09:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yBQuCWndqe 00:22:22.713 09:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:22.971 09:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.yBQuCWndqe 00:22:35.170 Initializing NVMe Controllers 00:22:35.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:35.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:35.170 Initialization complete. Launching workers. 
00:22:35.170 ======================================================== 00:22:35.170 Latency(us) 00:22:35.170 Device Information : IOPS MiB/s Average min max 00:22:35.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16317.46 63.74 3922.32 784.77 5587.02 00:22:35.170 ======================================================== 00:22:35.170 Total : 16317.46 63.74 3922.32 784.77 5587.02 00:22:35.170 00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yBQuCWndqe 00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yBQuCWndqe 00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1283303 00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1283303 /var/tmp/bdevperf.sock 00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1283303 ']' 00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:35.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.170 [2024-12-07 09:57:01.765108] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:22:35.170 [2024-12-07 09:57:01.765158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1283303 ] 00:22:35.170 [2024-12-07 09:57:01.816834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.170 [2024-12-07 09:57:01.858201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:35.170 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yBQuCWndqe 00:22:35.170 09:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:22:35.170 [2024-12-07 09:57:02.300952] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:35.170 TLSTESTn1 00:22:35.170 09:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:35.170 Running I/O for 10 seconds... 00:22:36.105 5551.00 IOPS, 21.68 MiB/s [2024-12-07T08:57:05.772Z] 5551.00 IOPS, 21.68 MiB/s [2024-12-07T08:57:06.701Z] 5566.67 IOPS, 21.74 MiB/s [2024-12-07T08:57:07.634Z] 5578.00 IOPS, 21.79 MiB/s [2024-12-07T08:57:08.569Z] 5508.80 IOPS, 21.52 MiB/s [2024-12-07T08:57:09.943Z] 5509.17 IOPS, 21.52 MiB/s [2024-12-07T08:57:10.877Z] 5508.71 IOPS, 21.52 MiB/s [2024-12-07T08:57:11.828Z] 5476.00 IOPS, 21.39 MiB/s [2024-12-07T08:57:12.764Z] 5492.67 IOPS, 21.46 MiB/s [2024-12-07T08:57:12.764Z] 5493.50 IOPS, 21.46 MiB/s 00:22:44.038 Latency(us) 00:22:44.038 [2024-12-07T08:57:12.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.038 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:44.038 Verification LBA range: start 0x0 length 0x2000 00:22:44.038 TLSTESTn1 : 10.02 5497.00 21.47 0.00 0.00 23248.98 5014.93 23934.89 00:22:44.038 [2024-12-07T08:57:12.764Z] =================================================================================================================== 00:22:44.038 [2024-12-07T08:57:12.764Z] Total : 5497.00 21.47 0.00 0.00 23248.98 5014.93 23934.89 00:22:44.038 { 00:22:44.038 "results": [ 00:22:44.038 { 00:22:44.038 "job": "TLSTESTn1", 00:22:44.038 "core_mask": "0x4", 00:22:44.038 "workload": "verify", 00:22:44.038 "status": "finished", 00:22:44.038 "verify_range": { 00:22:44.038 "start": 0, 00:22:44.038 "length": 8192 00:22:44.038 }, 00:22:44.038 "queue_depth": 128, 00:22:44.038 "io_size": 4096, 00:22:44.038 "runtime": 10.016563, 00:22:44.038 "iops": 
5496.9953266404855, 00:22:44.038 "mibps": 21.472637994689396, 00:22:44.038 "io_failed": 0, 00:22:44.038 "io_timeout": 0, 00:22:44.038 "avg_latency_us": 23248.978062117665, 00:22:44.038 "min_latency_us": 5014.928695652174, 00:22:44.038 "max_latency_us": 23934.88695652174 00:22:44.038 } 00:22:44.038 ], 00:22:44.038 "core_count": 1 00:22:44.038 } 00:22:44.038 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:44.038 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1283303 00:22:44.038 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1283303 ']' 00:22:44.038 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1283303 00:22:44.038 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:44.038 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:44.038 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1283303 00:22:44.038 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:44.038 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:44.038 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1283303' 00:22:44.038 killing process with pid 1283303 00:22:44.038 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1283303 00:22:44.038 Received shutdown signal, test time was about 10.000000 seconds 00:22:44.038 00:22:44.038 Latency(us) 00:22:44.038 [2024-12-07T08:57:12.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.038 [2024-12-07T08:57:12.764Z] 
=================================================================================================================== 00:22:44.038 [2024-12-07T08:57:12.764Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:44.038 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1283303 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cUzA0etz18 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cUzA0etz18 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cUzA0etz18 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cUzA0etz18 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1285459 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1285459 /var/tmp/bdevperf.sock 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1285459 ']' 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:44.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:44.296 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.296 [2024-12-07 09:57:12.841405] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:22:44.296 [2024-12-07 09:57:12.841451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285459 ] 00:22:44.296 [2024-12-07 09:57:12.891404] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.296 [2024-12-07 09:57:12.931822] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.296 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:44.296 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:44.296 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cUzA0etz18 00:22:44.555 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:44.813 [2024-12-07 09:57:13.364892] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:44.813 [2024-12-07 09:57:13.369735] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:44.814 [2024-12-07 09:57:13.370326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113ce20 (107): Transport endpoint is not connected 00:22:44.814 [2024-12-07 09:57:13.371318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113ce20 (9): Bad file descriptor 00:22:44.814 
[2024-12-07 09:57:13.372319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:44.814 [2024-12-07 09:57:13.372329] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:44.814 [2024-12-07 09:57:13.372338] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:44.814 [2024-12-07 09:57:13.372349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:44.814 request: 00:22:44.814 { 00:22:44.814 "name": "TLSTEST", 00:22:44.814 "trtype": "tcp", 00:22:44.814 "traddr": "10.0.0.2", 00:22:44.814 "adrfam": "ipv4", 00:22:44.814 "trsvcid": "4420", 00:22:44.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.814 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.814 "prchk_reftag": false, 00:22:44.814 "prchk_guard": false, 00:22:44.814 "hdgst": false, 00:22:44.814 "ddgst": false, 00:22:44.814 "psk": "key0", 00:22:44.814 "allow_unrecognized_csi": false, 00:22:44.814 "method": "bdev_nvme_attach_controller", 00:22:44.814 "req_id": 1 00:22:44.814 } 00:22:44.814 Got JSON-RPC error response 00:22:44.814 response: 00:22:44.814 { 00:22:44.814 "code": -5, 00:22:44.814 "message": "Input/output error" 00:22:44.814 } 00:22:44.814 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1285459 00:22:44.814 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1285459 ']' 00:22:44.814 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1285459 00:22:44.814 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:44.814 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:44.814 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1285459 00:22:44.814 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:44.814 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:44.814 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1285459' 00:22:44.814 killing process with pid 1285459 00:22:44.814 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1285459 00:22:44.814 Received shutdown signal, test time was about 10.000000 seconds 00:22:44.814 00:22:44.814 Latency(us) 00:22:44.814 [2024-12-07T08:57:13.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.814 [2024-12-07T08:57:13.540Z] =================================================================================================================== 00:22:44.814 [2024-12-07T08:57:13.540Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:44.814 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1285459 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yBQuCWndqe 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yBQuCWndqe 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yBQuCWndqe 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yBQuCWndqe 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1285685 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1285685 
/var/tmp/bdevperf.sock 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1285685 ']' 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.073 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.073 [2024-12-07 09:57:13.658639] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:22:45.073 [2024-12-07 09:57:13.658688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285685 ] 00:22:45.074 [2024-12-07 09:57:13.712409] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.074 [2024-12-07 09:57:13.748439] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.332 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.332 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:45.332 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yBQuCWndqe 00:22:45.332 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:45.590 [2024-12-07 09:57:14.189799] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:45.590 [2024-12-07 09:57:14.199702] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:45.590 [2024-12-07 09:57:14.199726] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:45.590 [2024-12-07 09:57:14.199752] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:45.590 [2024-12-07 09:57:14.200267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdbe20 (107): Transport endpoint is not connected 00:22:45.590 [2024-12-07 09:57:14.201261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdbe20 (9): Bad file descriptor 00:22:45.590 [2024-12-07 09:57:14.202262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:45.591 [2024-12-07 09:57:14.202272] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:45.591 [2024-12-07 09:57:14.202280] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:45.591 [2024-12-07 09:57:14.202291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:45.591 request: 00:22:45.591 { 00:22:45.591 "name": "TLSTEST", 00:22:45.591 "trtype": "tcp", 00:22:45.591 "traddr": "10.0.0.2", 00:22:45.591 "adrfam": "ipv4", 00:22:45.591 "trsvcid": "4420", 00:22:45.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.591 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:45.591 "prchk_reftag": false, 00:22:45.591 "prchk_guard": false, 00:22:45.591 "hdgst": false, 00:22:45.591 "ddgst": false, 00:22:45.591 "psk": "key0", 00:22:45.591 "allow_unrecognized_csi": false, 00:22:45.591 "method": "bdev_nvme_attach_controller", 00:22:45.591 "req_id": 1 00:22:45.591 } 00:22:45.591 Got JSON-RPC error response 00:22:45.591 response: 00:22:45.591 { 00:22:45.591 "code": -5, 00:22:45.591 "message": "Input/output error" 00:22:45.591 } 00:22:45.591 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1285685 00:22:45.591 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1285685 ']' 00:22:45.591 09:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1285685 00:22:45.591 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:45.591 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.591 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1285685 00:22:45.591 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:45.591 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:45.591 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1285685' 00:22:45.591 killing process with pid 1285685 00:22:45.591 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1285685 00:22:45.591 Received shutdown signal, test time was about 10.000000 seconds 00:22:45.591 00:22:45.591 Latency(us) 00:22:45.591 [2024-12-07T08:57:14.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.591 [2024-12-07T08:57:14.317Z] =================================================================================================================== 00:22:45.591 [2024-12-07T08:57:14.317Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:45.591 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1285685 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:45.849 09:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yBQuCWndqe 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yBQuCWndqe 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yBQuCWndqe 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yBQuCWndqe 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1285724 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1285724 /var/tmp/bdevperf.sock 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1285724 ']' 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.849 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.849 [2024-12-07 09:57:14.493075] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:22:45.849 [2024-12-07 09:57:14.493127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285724 ] 00:22:45.849 [2024-12-07 09:57:14.544419] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.108 [2024-12-07 09:57:14.583048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.108 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:46.108 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:46.108 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yBQuCWndqe 00:22:46.366 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:46.366 [2024-12-07 09:57:15.036156] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:46.366 [2024-12-07 09:57:15.041867] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:46.366 [2024-12-07 09:57:15.041890] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:46.366 [2024-12-07 09:57:15.041916] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:46.366 [2024-12-07 09:57:15.042601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214be20 (107): Transport endpoint is not connected 00:22:46.366 [2024-12-07 09:57:15.043596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214be20 (9): Bad file descriptor 00:22:46.366 [2024-12-07 09:57:15.044598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:46.366 [2024-12-07 09:57:15.044609] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:46.366 [2024-12-07 09:57:15.044617] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:46.366 [2024-12-07 09:57:15.044628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:46.366 request: 00:22:46.366 { 00:22:46.366 "name": "TLSTEST", 00:22:46.366 "trtype": "tcp", 00:22:46.366 "traddr": "10.0.0.2", 00:22:46.366 "adrfam": "ipv4", 00:22:46.366 "trsvcid": "4420", 00:22:46.366 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:46.366 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:46.366 "prchk_reftag": false, 00:22:46.366 "prchk_guard": false, 00:22:46.366 "hdgst": false, 00:22:46.366 "ddgst": false, 00:22:46.366 "psk": "key0", 00:22:46.366 "allow_unrecognized_csi": false, 00:22:46.366 "method": "bdev_nvme_attach_controller", 00:22:46.366 "req_id": 1 00:22:46.366 } 00:22:46.366 Got JSON-RPC error response 00:22:46.366 response: 00:22:46.366 { 00:22:46.366 "code": -5, 00:22:46.366 "message": "Input/output error" 00:22:46.366 } 00:22:46.366 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1285724 00:22:46.366 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1285724 ']' 00:22:46.366 09:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1285724 00:22:46.366 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:46.366 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:46.367 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1285724 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1285724' 00:22:46.624 killing process with pid 1285724 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1285724 00:22:46.624 Received shutdown signal, test time was about 10.000000 seconds 00:22:46.624 00:22:46.624 Latency(us) 00:22:46.624 [2024-12-07T08:57:15.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.624 [2024-12-07T08:57:15.350Z] =================================================================================================================== 00:22:46.624 [2024-12-07T08:57:15.350Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1285724 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:46.624 09:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1285938 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:46.624 09:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1285938 /var/tmp/bdevperf.sock 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1285938 ']' 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:46.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:46.624 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.624 [2024-12-07 09:57:15.337221] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:22:46.624 [2024-12-07 09:57:15.337273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285938 ] 00:22:46.882 [2024-12-07 09:57:15.387509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.882 [2024-12-07 09:57:15.423672] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.882 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:46.882 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:46.882 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:47.139 [2024-12-07 09:57:15.676703] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:47.139 [2024-12-07 09:57:15.676739] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:47.139 request: 00:22:47.139 { 00:22:47.139 "name": "key0", 00:22:47.139 "path": "", 00:22:47.139 "method": "keyring_file_add_key", 00:22:47.139 "req_id": 1 00:22:47.139 } 00:22:47.139 Got JSON-RPC error response 00:22:47.139 response: 00:22:47.139 { 00:22:47.139 "code": -1, 00:22:47.139 "message": "Operation not permitted" 00:22:47.139 } 00:22:47.139 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:47.396 [2024-12-07 09:57:15.869283] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:22:47.396 [2024-12-07 09:57:15.869310] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:47.396 request: 00:22:47.396 { 00:22:47.396 "name": "TLSTEST", 00:22:47.396 "trtype": "tcp", 00:22:47.396 "traddr": "10.0.0.2", 00:22:47.396 "adrfam": "ipv4", 00:22:47.396 "trsvcid": "4420", 00:22:47.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:47.396 "prchk_reftag": false, 00:22:47.396 "prchk_guard": false, 00:22:47.396 "hdgst": false, 00:22:47.396 "ddgst": false, 00:22:47.396 "psk": "key0", 00:22:47.396 "allow_unrecognized_csi": false, 00:22:47.396 "method": "bdev_nvme_attach_controller", 00:22:47.396 "req_id": 1 00:22:47.396 } 00:22:47.396 Got JSON-RPC error response 00:22:47.396 response: 00:22:47.396 { 00:22:47.396 "code": -126, 00:22:47.396 "message": "Required key not available" 00:22:47.396 } 00:22:47.396 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1285938 00:22:47.396 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1285938 ']' 00:22:47.396 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1285938 00:22:47.396 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:47.396 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:47.396 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1285938 00:22:47.396 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:47.396 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:47.396 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1285938' 00:22:47.396 killing process with pid 1285938 
00:22:47.396 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1285938 00:22:47.396 Received shutdown signal, test time was about 10.000000 seconds 00:22:47.396 00:22:47.396 Latency(us) 00:22:47.396 [2024-12-07T08:57:16.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.396 [2024-12-07T08:57:16.122Z] =================================================================================================================== 00:22:47.396 [2024-12-07T08:57:16.122Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:47.396 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1285938 00:22:47.396 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:47.396 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:47.396 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:47.396 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:47.396 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:47.396 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1280765 00:22:47.396 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1280765 ']' 00:22:47.396 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1280765 00:22:47.396 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:47.396 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:47.396 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1280765 00:22:47.654 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- 
# process_name=reactor_1 00:22:47.654 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:47.654 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1280765' 00:22:47.655 killing process with pid 1280765 00:22:47.655 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1280765 00:22:47.655 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1280765 00:22:47.655 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:47.655 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:47.655 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:22:47.655 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:22:47.655 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:47.655 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:22:47.655 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.il7fgHUYpt 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:47.914 09:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.il7fgHUYpt 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=1286182 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 1286182 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1286182 ']' 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.914 [2024-12-07 09:57:16.449581] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:22:47.914 [2024-12-07 09:57:16.449629] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.914 [2024-12-07 09:57:16.508366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.914 [2024-12-07 09:57:16.548228] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.914 [2024-12-07 09:57:16.548269] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.914 [2024-12-07 09:57:16.548276] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.914 [2024-12-07 09:57:16.548281] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.914 [2024-12-07 09:57:16.548286] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:47.914 [2024-12-07 09:57:16.548321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:47.914 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.172 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.172 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.il7fgHUYpt 00:22:48.172 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.il7fgHUYpt 00:22:48.172 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:48.172 [2024-12-07 09:57:16.837717] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.172 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:48.429 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:48.686 [2024-12-07 09:57:17.206658] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:48.686 [2024-12-07 09:57:17.206859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:48.686 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:48.945 malloc0 00:22:48.945 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:48.945 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.il7fgHUYpt 00:22:49.202 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:49.459 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.il7fgHUYpt 00:22:49.459 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:49.459 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:49.459 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:49.459 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.il7fgHUYpt 00:22:49.459 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.459 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:49.459 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1286442 00:22:49.459 09:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.459 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1286442 /var/tmp/bdevperf.sock 00:22:49.459 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1286442 ']' 00:22:49.459 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.459 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:49.459 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.459 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:49.459 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.459 [2024-12-07 09:57:18.024073] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:22:49.459 [2024-12-07 09:57:18.024123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1286442 ] 00:22:49.459 [2024-12-07 09:57:18.075029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.459 [2024-12-07 09:57:18.115571] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.715 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:49.715 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:49.715 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.il7fgHUYpt 00:22:49.715 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:49.972 [2024-12-07 09:57:18.548839] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.972 TLSTESTn1 00:22:49.972 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:50.229 Running I/O for 10 seconds... 
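On the initiator side, the sequence above registers the key file with bdevperf's private RPC socket and then attaches a TLS-enabled controller. A sketch of the two JSON-RPC 2.0 payloads rpc.py sends over `/var/tmp/bdevperf.sock` (method and parameter names are taken from the request dumps in this log; the small framing helper is illustrative, not SPDK code):

```python
import json

def rpc_request(req_id: int, method: str, **params) -> str:
    # SPDK's rpc.py speaks plain JSON-RPC 2.0 over a Unix domain socket.
    return json.dumps(
        {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    )

add_key = rpc_request(
    1, "keyring_file_add_key", name="key0", path="/tmp/tmp.il7fgHUYpt"
)
attach = rpc_request(
    2,
    "bdev_nvme_attach_controller",
    name="TLSTEST",
    trtype="tcp",
    traddr="10.0.0.2",
    adrfam="ipv4",
    trsvcid="4420",
    subnqn="nqn.2016-06.io.spdk:cnode1",
    hostnqn="nqn.2016-06.io.spdk:host1",
    psk="key0",
)
print(add_key)
print(attach)
```

The `psk` argument names the keyring entry ("key0"), not the file path, which is why the key must be added to the keyring first.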
00:22:52.095 5264.00 IOPS, 20.56 MiB/s [2024-12-07T08:57:21.754Z] 5291.50 IOPS, 20.67 MiB/s [2024-12-07T08:57:23.129Z] 5385.33 IOPS, 21.04 MiB/s [2024-12-07T08:57:24.070Z] 5319.00 IOPS, 20.78 MiB/s [2024-12-07T08:57:25.004Z] 5357.40 IOPS, 20.93 MiB/s [2024-12-07T08:57:25.951Z] 5373.33 IOPS, 20.99 MiB/s [2024-12-07T08:57:26.882Z] 5371.00 IOPS, 20.98 MiB/s [2024-12-07T08:57:27.813Z] 5345.12 IOPS, 20.88 MiB/s [2024-12-07T08:57:29.187Z] 5344.00 IOPS, 20.88 MiB/s [2024-12-07T08:57:29.187Z] 5373.90 IOPS, 20.99 MiB/s 00:23:00.461 Latency(us) 00:23:00.461 [2024-12-07T08:57:29.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.461 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:00.461 Verification LBA range: start 0x0 length 0x2000 00:23:00.461 TLSTESTn1 : 10.02 5377.79 21.01 0.00 0.00 23763.49 5869.75 26898.25 00:23:00.461 [2024-12-07T08:57:29.187Z] =================================================================================================================== 00:23:00.461 [2024-12-07T08:57:29.187Z] Total : 5377.79 21.01 0.00 0.00 23763.49 5869.75 26898.25 00:23:00.461 { 00:23:00.461 "results": [ 00:23:00.461 { 00:23:00.461 "job": "TLSTESTn1", 00:23:00.461 "core_mask": "0x4", 00:23:00.461 "workload": "verify", 00:23:00.461 "status": "finished", 00:23:00.461 "verify_range": { 00:23:00.461 "start": 0, 00:23:00.461 "length": 8192 00:23:00.461 }, 00:23:00.461 "queue_depth": 128, 00:23:00.461 "io_size": 4096, 00:23:00.461 "runtime": 10.016199, 00:23:00.461 "iops": 5377.788520375843, 00:23:00.461 "mibps": 21.006986407718138, 00:23:00.461 "io_failed": 0, 00:23:00.461 "io_timeout": 0, 00:23:00.461 "avg_latency_us": 23763.486534565076, 00:23:00.461 "min_latency_us": 5869.746086956522, 00:23:00.461 "max_latency_us": 26898.253913043478 00:23:00.461 } 00:23:00.461 ], 00:23:00.461 "core_count": 1 00:23:00.461 } 00:23:00.461 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:00.461 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1286442 00:23:00.461 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1286442 ']' 00:23:00.461 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1286442 00:23:00.461 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:00.461 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:00.461 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1286442 00:23:00.461 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:00.461 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:00.461 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1286442' 00:23:00.461 killing process with pid 1286442 00:23:00.461 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1286442 00:23:00.461 Received shutdown signal, test time was about 10.000000 seconds 00:23:00.461 00:23:00.461 Latency(us) 00:23:00.461 [2024-12-07T08:57:29.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.461 [2024-12-07T08:57:29.187Z] =================================================================================================================== 00:23:00.461 [2024-12-07T08:57:29.187Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:00.461 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1286442 00:23:00.461 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.il7fgHUYpt 00:23:00.461 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.il7fgHUYpt 00:23:00.461 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:00.461 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.il7fgHUYpt 00:23:00.461 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:00.461 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.462 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:00.462 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.462 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.il7fgHUYpt 00:23:00.462 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:00.462 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:00.462 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:00.462 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.il7fgHUYpt 00:23:00.462 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:00.462 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1288266 00:23:00.462 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:00.462 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:00.462 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1288266 /var/tmp/bdevperf.sock 00:23:00.462 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1288266 ']' 00:23:00.462 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.462 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:00.462 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.462 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:00.462 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.462 [2024-12-07 09:57:29.066671] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
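As a sanity check on the successful run's results above: bdevperf reports both IOPS and MiB/s for the 4 KiB verify workload, and the two columns are mutually consistent (5377.79 IOPS at 4096 B per I/O is about 21.01 MiB/s):

```python
# Each verify I/O is 4096 bytes (-o 4096); MiB/s = IOPS * io_size / 2**20.
iops = 5377.788520375843   # "iops" from the JSON results block above
io_size = 4096
mibps = iops * io_size / 2**20
print(round(mibps, 2))     # → 21.01, matching the reported "mibps"
```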
00:23:00.462 [2024-12-07 09:57:29.066722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288266 ] 00:23:00.462 [2024-12-07 09:57:29.116774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.462 [2024-12-07 09:57:29.152611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.719 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:00.719 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:00.719 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.il7fgHUYpt 00:23:00.719 [2024-12-07 09:57:29.401367] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.il7fgHUYpt': 0100666 00:23:00.720 [2024-12-07 09:57:29.401404] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:00.720 request: 00:23:00.720 { 00:23:00.720 "name": "key0", 00:23:00.720 "path": "/tmp/tmp.il7fgHUYpt", 00:23:00.720 "method": "keyring_file_add_key", 00:23:00.720 "req_id": 1 00:23:00.720 } 00:23:00.720 Got JSON-RPC error response 00:23:00.720 response: 00:23:00.720 { 00:23:00.720 "code": -1, 00:23:00.720 "message": "Operation not permitted" 00:23:00.720 } 00:23:00.720 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:00.977 [2024-12-07 09:57:29.589932] bdev_nvme_rpc.c: 
517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.977 [2024-12-07 09:57:29.589961] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:00.977 request: 00:23:00.977 { 00:23:00.977 "name": "TLSTEST", 00:23:00.977 "trtype": "tcp", 00:23:00.977 "traddr": "10.0.0.2", 00:23:00.977 "adrfam": "ipv4", 00:23:00.977 "trsvcid": "4420", 00:23:00.977 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.977 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.977 "prchk_reftag": false, 00:23:00.977 "prchk_guard": false, 00:23:00.977 "hdgst": false, 00:23:00.977 "ddgst": false, 00:23:00.977 "psk": "key0", 00:23:00.977 "allow_unrecognized_csi": false, 00:23:00.977 "method": "bdev_nvme_attach_controller", 00:23:00.977 "req_id": 1 00:23:00.977 } 00:23:00.977 Got JSON-RPC error response 00:23:00.977 response: 00:23:00.977 { 00:23:00.977 "code": -126, 00:23:00.977 "message": "Required key not available" 00:23:00.977 } 00:23:00.977 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1288266 00:23:00.977 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1288266 ']' 00:23:00.977 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1288266 00:23:00.977 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:00.977 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:00.977 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1288266 00:23:00.977 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:00.977 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:00.977 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 1288266' 00:23:00.977 killing process with pid 1288266 00:23:00.977 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1288266 00:23:00.977 Received shutdown signal, test time was about 10.000000 seconds 00:23:00.977 00:23:00.977 Latency(us) 00:23:00.977 [2024-12-07T08:57:29.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.977 [2024-12-07T08:57:29.703Z] =================================================================================================================== 00:23:00.977 [2024-12-07T08:57:29.703Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:00.977 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1288266 00:23:01.236 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:01.236 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:01.236 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:01.236 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:01.236 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:01.236 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1286182 00:23:01.236 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1286182 ']' 00:23:01.236 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1286182 00:23:01.236 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:01.236 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:01.236 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1286182 00:23:01.236 
09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:01.236 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:01.236 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1286182' 00:23:01.236 killing process with pid 1286182 00:23:01.236 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1286182 00:23:01.236 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1286182 00:23:01.494 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:01.494 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:01.494 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:01.494 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.494 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=1288346 00:23:01.494 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 1288346 00:23:01.494 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:01.494 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1288346 ']' 00:23:01.494 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.494 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:01.494 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:23:01.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.494 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:01.494 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.494 [2024-12-07 09:57:30.134072] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:01.494 [2024-12-07 09:57:30.134120] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.494 [2024-12-07 09:57:30.192751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.753 [2024-12-07 09:57:30.233772] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.753 [2024-12-07 09:57:30.233810] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.753 [2024-12-07 09:57:30.233818] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.753 [2024-12-07 09:57:30.233825] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.753 [2024-12-07 09:57:30.233831] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
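The `keyring_file_add_key` failure above ("Invalid permissions for key file ... 0100666") is the point of the chmod 0666 step: SPDK's `keyring_file_check_path` rejects key files that are readable by group or others. A hedged re-implementation of that policy (the function below is a sketch inferred from the logged behavior, not SPDK's actual code):

```python
import os
import stat
import tempfile

def check_key_file_mode(path: str) -> None:
    # Reject key files that grant any group/other permission bits,
    # mirroring the 0600-only policy the error message above implies.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        raise PermissionError(
            f"Invalid permissions for key file {path!r}: {mode:04o}"
        )

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o600)
check_key_file_mode(path)      # accepted, as in the first TLSTEST run
os.chmod(path, 0o666)
try:
    check_key_file_mode(path)  # rejected, as in the chmod 0666 run
except PermissionError as e:
    print(e)
os.unlink(path)
```

This is why the test restores `chmod 0600` on the key file before the final nvmf target restart.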
00:23:01.753 [2024-12-07 09:57:30.233849] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.753 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:01.753 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:01.753 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:01.753 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:01.753 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.753 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.753 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.il7fgHUYpt 00:23:01.753 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:01.753 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.il7fgHUYpt 00:23:01.753 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:01.753 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:01.753 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:01.753 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:01.753 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.il7fgHUYpt 00:23:01.753 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.il7fgHUYpt 00:23:01.753 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:02.011 [2024-12-07 09:57:30.535698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.011 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:02.269 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:02.269 [2024-12-07 09:57:30.912689] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:02.269 [2024-12-07 09:57:30.912892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.269 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:02.526 malloc0 00:23:02.526 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:02.783 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.il7fgHUYpt 00:23:02.783 [2024-12-07 09:57:31.498283] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.il7fgHUYpt': 0100666 00:23:02.783 [2024-12-07 09:57:31.498314] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:02.783 request: 00:23:02.783 { 00:23:02.783 "name": "key0", 00:23:02.783 "path": "/tmp/tmp.il7fgHUYpt", 00:23:02.783 "method": "keyring_file_add_key", 00:23:02.783 "req_id": 1 
00:23:02.783 } 00:23:02.783 Got JSON-RPC error response 00:23:02.783 response: 00:23:02.783 { 00:23:02.783 "code": -1, 00:23:02.783 "message": "Operation not permitted" 00:23:02.783 } 00:23:03.040 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:03.040 [2024-12-07 09:57:31.686817] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:03.040 [2024-12-07 09:57:31.686863] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:03.040 request: 00:23:03.040 { 00:23:03.040 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.040 "host": "nqn.2016-06.io.spdk:host1", 00:23:03.040 "psk": "key0", 00:23:03.040 "method": "nvmf_subsystem_add_host", 00:23:03.040 "req_id": 1 00:23:03.040 } 00:23:03.040 Got JSON-RPC error response 00:23:03.040 response: 00:23:03.040 { 00:23:03.040 "code": -32603, 00:23:03.040 "message": "Internal error" 00:23:03.040 } 00:23:03.040 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:03.040 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:03.040 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:03.040 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:03.040 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1288346 00:23:03.040 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1288346 ']' 00:23:03.040 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1288346 00:23:03.040 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:03.040 09:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:03.040 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1288346 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1288346' 00:23:03.298 killing process with pid 1288346 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1288346 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1288346 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.il7fgHUYpt 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=1288776 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 1288776 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1288776 ']' 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:03.298 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.298 [2024-12-07 09:57:32.006456] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:03.298 [2024-12-07 09:57:32.006501] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.557 [2024-12-07 09:57:32.063053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.557 [2024-12-07 09:57:32.103311] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.557 [2024-12-07 09:57:32.103351] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.557 [2024-12-07 09:57:32.103358] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.557 [2024-12-07 09:57:32.103364] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.557 [2024-12-07 09:57:32.103370] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:03.557 [2024-12-07 09:57:32.103387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.557 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:03.557 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:03.557 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:03.557 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:03.557 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.557 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.557 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.il7fgHUYpt 00:23:03.557 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.il7fgHUYpt 00:23:03.557 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:03.816 [2024-12-07 09:57:32.396796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.816 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:04.074 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:04.074 [2024-12-07 09:57:32.761739] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:04.074 [2024-12-07 09:57:32.761953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:04.074 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:04.332 malloc0 00:23:04.332 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:04.590 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.il7fgHUYpt 00:23:04.849 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:04.849 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:04.849 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1289036 00:23:04.849 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:04.849 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1289036 /var/tmp/bdevperf.sock 00:23:04.849 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1289036 ']' 00:23:04.849 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.849 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:04.849 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:23:04.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:04.849 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:04.849 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.849 [2024-12-07 09:57:33.565220] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:04.849 [2024-12-07 09:57:33.565277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1289036 ] 00:23:05.108 [2024-12-07 09:57:33.615243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.108 [2024-12-07 09:57:33.655552] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.108 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:05.108 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:05.108 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.il7fgHUYpt 00:23:05.365 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:05.624 [2024-12-07 09:57:34.092600] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:05.624 TLSTESTn1 00:23:05.624 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:05.882 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:05.882 "subsystems": [ 00:23:05.882 { 00:23:05.882 "subsystem": "keyring", 00:23:05.882 "config": [ 00:23:05.882 { 00:23:05.882 "method": "keyring_file_add_key", 00:23:05.882 "params": { 00:23:05.882 "name": "key0", 00:23:05.882 "path": "/tmp/tmp.il7fgHUYpt" 00:23:05.882 } 00:23:05.882 } 00:23:05.882 ] 00:23:05.882 }, 00:23:05.882 { 00:23:05.882 "subsystem": "iobuf", 00:23:05.882 "config": [ 00:23:05.882 { 00:23:05.882 "method": "iobuf_set_options", 00:23:05.882 "params": { 00:23:05.882 "small_pool_count": 8192, 00:23:05.882 "large_pool_count": 1024, 00:23:05.882 "small_bufsize": 8192, 00:23:05.882 "large_bufsize": 135168 00:23:05.882 } 00:23:05.882 } 00:23:05.882 ] 00:23:05.882 }, 00:23:05.882 { 00:23:05.882 "subsystem": "sock", 00:23:05.882 "config": [ 00:23:05.882 { 00:23:05.882 "method": "sock_set_default_impl", 00:23:05.882 "params": { 00:23:05.882 "impl_name": "posix" 00:23:05.882 } 00:23:05.882 }, 00:23:05.882 { 00:23:05.882 "method": "sock_impl_set_options", 00:23:05.882 "params": { 00:23:05.882 "impl_name": "ssl", 00:23:05.882 "recv_buf_size": 4096, 00:23:05.882 "send_buf_size": 4096, 00:23:05.882 "enable_recv_pipe": true, 00:23:05.882 "enable_quickack": false, 00:23:05.882 "enable_placement_id": 0, 00:23:05.882 "enable_zerocopy_send_server": true, 00:23:05.882 "enable_zerocopy_send_client": false, 00:23:05.882 "zerocopy_threshold": 0, 00:23:05.882 "tls_version": 0, 00:23:05.882 "enable_ktls": false 00:23:05.882 } 00:23:05.882 }, 00:23:05.882 { 00:23:05.882 "method": "sock_impl_set_options", 00:23:05.882 "params": { 00:23:05.882 "impl_name": "posix", 00:23:05.882 "recv_buf_size": 2097152, 00:23:05.882 "send_buf_size": 2097152, 00:23:05.882 "enable_recv_pipe": true, 00:23:05.882 "enable_quickack": false, 00:23:05.882 "enable_placement_id": 0, 00:23:05.882 
"enable_zerocopy_send_server": true, 00:23:05.882 "enable_zerocopy_send_client": false, 00:23:05.882 "zerocopy_threshold": 0, 00:23:05.882 "tls_version": 0, 00:23:05.882 "enable_ktls": false 00:23:05.882 } 00:23:05.882 } 00:23:05.882 ] 00:23:05.882 }, 00:23:05.882 { 00:23:05.882 "subsystem": "vmd", 00:23:05.882 "config": [] 00:23:05.882 }, 00:23:05.882 { 00:23:05.882 "subsystem": "accel", 00:23:05.882 "config": [ 00:23:05.882 { 00:23:05.882 "method": "accel_set_options", 00:23:05.882 "params": { 00:23:05.882 "small_cache_size": 128, 00:23:05.882 "large_cache_size": 16, 00:23:05.882 "task_count": 2048, 00:23:05.882 "sequence_count": 2048, 00:23:05.882 "buf_count": 2048 00:23:05.882 } 00:23:05.882 } 00:23:05.882 ] 00:23:05.882 }, 00:23:05.882 { 00:23:05.882 "subsystem": "bdev", 00:23:05.882 "config": [ 00:23:05.882 { 00:23:05.882 "method": "bdev_set_options", 00:23:05.882 "params": { 00:23:05.882 "bdev_io_pool_size": 65535, 00:23:05.883 "bdev_io_cache_size": 256, 00:23:05.883 "bdev_auto_examine": true, 00:23:05.883 "iobuf_small_cache_size": 128, 00:23:05.883 "iobuf_large_cache_size": 16 00:23:05.883 } 00:23:05.883 }, 00:23:05.883 { 00:23:05.883 "method": "bdev_raid_set_options", 00:23:05.883 "params": { 00:23:05.883 "process_window_size_kb": 1024, 00:23:05.883 "process_max_bandwidth_mb_sec": 0 00:23:05.883 } 00:23:05.883 }, 00:23:05.883 { 00:23:05.883 "method": "bdev_iscsi_set_options", 00:23:05.883 "params": { 00:23:05.883 "timeout_sec": 30 00:23:05.883 } 00:23:05.883 }, 00:23:05.883 { 00:23:05.883 "method": "bdev_nvme_set_options", 00:23:05.883 "params": { 00:23:05.883 "action_on_timeout": "none", 00:23:05.883 "timeout_us": 0, 00:23:05.883 "timeout_admin_us": 0, 00:23:05.883 "keep_alive_timeout_ms": 10000, 00:23:05.883 "arbitration_burst": 0, 00:23:05.883 "low_priority_weight": 0, 00:23:05.883 "medium_priority_weight": 0, 00:23:05.883 "high_priority_weight": 0, 00:23:05.883 "nvme_adminq_poll_period_us": 10000, 00:23:05.883 "nvme_ioq_poll_period_us": 0, 00:23:05.883 
"io_queue_requests": 0, 00:23:05.883 "delay_cmd_submit": true, 00:23:05.883 "transport_retry_count": 4, 00:23:05.883 "bdev_retry_count": 3, 00:23:05.883 "transport_ack_timeout": 0, 00:23:05.883 "ctrlr_loss_timeout_sec": 0, 00:23:05.883 "reconnect_delay_sec": 0, 00:23:05.883 "fast_io_fail_timeout_sec": 0, 00:23:05.883 "disable_auto_failback": false, 00:23:05.883 "generate_uuids": false, 00:23:05.883 "transport_tos": 0, 00:23:05.883 "nvme_error_stat": false, 00:23:05.883 "rdma_srq_size": 0, 00:23:05.883 "io_path_stat": false, 00:23:05.883 "allow_accel_sequence": false, 00:23:05.883 "rdma_max_cq_size": 0, 00:23:05.883 "rdma_cm_event_timeout_ms": 0, 00:23:05.883 "dhchap_digests": [ 00:23:05.883 "sha256", 00:23:05.883 "sha384", 00:23:05.883 "sha512" 00:23:05.883 ], 00:23:05.883 "dhchap_dhgroups": [ 00:23:05.883 "null", 00:23:05.883 "ffdhe2048", 00:23:05.883 "ffdhe3072", 00:23:05.883 "ffdhe4096", 00:23:05.883 "ffdhe6144", 00:23:05.883 "ffdhe8192" 00:23:05.883 ] 00:23:05.883 } 00:23:05.883 }, 00:23:05.883 { 00:23:05.883 "method": "bdev_nvme_set_hotplug", 00:23:05.883 "params": { 00:23:05.883 "period_us": 100000, 00:23:05.883 "enable": false 00:23:05.883 } 00:23:05.883 }, 00:23:05.883 { 00:23:05.883 "method": "bdev_malloc_create", 00:23:05.883 "params": { 00:23:05.883 "name": "malloc0", 00:23:05.883 "num_blocks": 8192, 00:23:05.883 "block_size": 4096, 00:23:05.883 "physical_block_size": 4096, 00:23:05.883 "uuid": "cf48bbae-04de-417f-88e5-f7627d8720d1", 00:23:05.883 "optimal_io_boundary": 0, 00:23:05.883 "md_size": 0, 00:23:05.883 "dif_type": 0, 00:23:05.883 "dif_is_head_of_md": false, 00:23:05.883 "dif_pi_format": 0 00:23:05.883 } 00:23:05.883 }, 00:23:05.883 { 00:23:05.883 "method": "bdev_wait_for_examine" 00:23:05.883 } 00:23:05.883 ] 00:23:05.883 }, 00:23:05.883 { 00:23:05.883 "subsystem": "nbd", 00:23:05.883 "config": [] 00:23:05.883 }, 00:23:05.883 { 00:23:05.883 "subsystem": "scheduler", 00:23:05.883 "config": [ 00:23:05.883 { 00:23:05.883 "method": 
"framework_set_scheduler", 00:23:05.883 "params": { 00:23:05.883 "name": "static" 00:23:05.883 } 00:23:05.883 } 00:23:05.883 ] 00:23:05.883 }, 00:23:05.883 { 00:23:05.883 "subsystem": "nvmf", 00:23:05.883 "config": [ 00:23:05.883 { 00:23:05.883 "method": "nvmf_set_config", 00:23:05.883 "params": { 00:23:05.883 "discovery_filter": "match_any", 00:23:05.883 "admin_cmd_passthru": { 00:23:05.883 "identify_ctrlr": false 00:23:05.883 }, 00:23:05.883 "dhchap_digests": [ 00:23:05.883 "sha256", 00:23:05.883 "sha384", 00:23:05.883 "sha512" 00:23:05.883 ], 00:23:05.883 "dhchap_dhgroups": [ 00:23:05.883 "null", 00:23:05.883 "ffdhe2048", 00:23:05.883 "ffdhe3072", 00:23:05.883 "ffdhe4096", 00:23:05.883 "ffdhe6144", 00:23:05.883 "ffdhe8192" 00:23:05.883 ] 00:23:05.883 } 00:23:05.883 }, 00:23:05.883 { 00:23:05.883 "method": "nvmf_set_max_subsystems", 00:23:05.883 "params": { 00:23:05.883 "max_subsystems": 1024 00:23:05.883 } 00:23:05.883 }, 00:23:05.883 { 00:23:05.883 "method": "nvmf_set_crdt", 00:23:05.883 "params": { 00:23:05.883 "crdt1": 0, 00:23:05.883 "crdt2": 0, 00:23:05.883 "crdt3": 0 00:23:05.883 } 00:23:05.883 }, 00:23:05.883 { 00:23:05.883 "method": "nvmf_create_transport", 00:23:05.883 "params": { 00:23:05.883 "trtype": "TCP", 00:23:05.883 "max_queue_depth": 128, 00:23:05.883 "max_io_qpairs_per_ctrlr": 127, 00:23:05.883 "in_capsule_data_size": 4096, 00:23:05.883 "max_io_size": 131072, 00:23:05.883 "io_unit_size": 131072, 00:23:05.883 "max_aq_depth": 128, 00:23:05.883 "num_shared_buffers": 511, 00:23:05.883 "buf_cache_size": 4294967295, 00:23:05.883 "dif_insert_or_strip": false, 00:23:05.883 "zcopy": false, 00:23:05.883 "c2h_success": false, 00:23:05.883 "sock_priority": 0, 00:23:05.883 "abort_timeout_sec": 1, 00:23:05.883 "ack_timeout": 0, 00:23:05.883 "data_wr_pool_size": 0 00:23:05.883 } 00:23:05.883 }, 00:23:05.883 { 00:23:05.883 "method": "nvmf_create_subsystem", 00:23:05.883 "params": { 00:23:05.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.883 
"allow_any_host": false, 00:23:05.883 "serial_number": "SPDK00000000000001", 00:23:05.883 "model_number": "SPDK bdev Controller", 00:23:05.883 "max_namespaces": 10, 00:23:05.883 "min_cntlid": 1, 00:23:05.883 "max_cntlid": 65519, 00:23:05.883 "ana_reporting": false 00:23:05.883 } 00:23:05.883 }, 00:23:05.883 { 00:23:05.883 "method": "nvmf_subsystem_add_host", 00:23:05.883 "params": { 00:23:05.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.883 "host": "nqn.2016-06.io.spdk:host1", 00:23:05.884 "psk": "key0" 00:23:05.884 } 00:23:05.884 }, 00:23:05.884 { 00:23:05.884 "method": "nvmf_subsystem_add_ns", 00:23:05.884 "params": { 00:23:05.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.884 "namespace": { 00:23:05.884 "nsid": 1, 00:23:05.884 "bdev_name": "malloc0", 00:23:05.884 "nguid": "CF48BBAE04DE417F88E5F7627D8720D1", 00:23:05.884 "uuid": "cf48bbae-04de-417f-88e5-f7627d8720d1", 00:23:05.884 "no_auto_visible": false 00:23:05.884 } 00:23:05.884 } 00:23:05.884 }, 00:23:05.884 { 00:23:05.884 "method": "nvmf_subsystem_add_listener", 00:23:05.884 "params": { 00:23:05.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.884 "listen_address": { 00:23:05.884 "trtype": "TCP", 00:23:05.884 "adrfam": "IPv4", 00:23:05.884 "traddr": "10.0.0.2", 00:23:05.884 "trsvcid": "4420" 00:23:05.884 }, 00:23:05.884 "secure_channel": true 00:23:05.884 } 00:23:05.884 } 00:23:05.884 ] 00:23:05.884 } 00:23:05.884 ] 00:23:05.884 }' 00:23:05.884 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:06.143 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:06.143 "subsystems": [ 00:23:06.143 { 00:23:06.143 "subsystem": "keyring", 00:23:06.143 "config": [ 00:23:06.143 { 00:23:06.143 "method": "keyring_file_add_key", 00:23:06.143 "params": { 00:23:06.143 "name": "key0", 00:23:06.143 "path": "/tmp/tmp.il7fgHUYpt" 00:23:06.143 } 
00:23:06.143 } 00:23:06.143 ] 00:23:06.143 }, 00:23:06.143 { 00:23:06.143 "subsystem": "iobuf", 00:23:06.143 "config": [ 00:23:06.143 { 00:23:06.143 "method": "iobuf_set_options", 00:23:06.143 "params": { 00:23:06.143 "small_pool_count": 8192, 00:23:06.143 "large_pool_count": 1024, 00:23:06.143 "small_bufsize": 8192, 00:23:06.143 "large_bufsize": 135168 00:23:06.143 } 00:23:06.143 } 00:23:06.143 ] 00:23:06.143 }, 00:23:06.143 { 00:23:06.143 "subsystem": "sock", 00:23:06.143 "config": [ 00:23:06.143 { 00:23:06.143 "method": "sock_set_default_impl", 00:23:06.143 "params": { 00:23:06.143 "impl_name": "posix" 00:23:06.143 } 00:23:06.143 }, 00:23:06.143 { 00:23:06.143 "method": "sock_impl_set_options", 00:23:06.143 "params": { 00:23:06.143 "impl_name": "ssl", 00:23:06.143 "recv_buf_size": 4096, 00:23:06.143 "send_buf_size": 4096, 00:23:06.143 "enable_recv_pipe": true, 00:23:06.143 "enable_quickack": false, 00:23:06.143 "enable_placement_id": 0, 00:23:06.143 "enable_zerocopy_send_server": true, 00:23:06.143 "enable_zerocopy_send_client": false, 00:23:06.143 "zerocopy_threshold": 0, 00:23:06.143 "tls_version": 0, 00:23:06.143 "enable_ktls": false 00:23:06.143 } 00:23:06.143 }, 00:23:06.143 { 00:23:06.143 "method": "sock_impl_set_options", 00:23:06.143 "params": { 00:23:06.143 "impl_name": "posix", 00:23:06.143 "recv_buf_size": 2097152, 00:23:06.143 "send_buf_size": 2097152, 00:23:06.143 "enable_recv_pipe": true, 00:23:06.143 "enable_quickack": false, 00:23:06.143 "enable_placement_id": 0, 00:23:06.143 "enable_zerocopy_send_server": true, 00:23:06.143 "enable_zerocopy_send_client": false, 00:23:06.143 "zerocopy_threshold": 0, 00:23:06.143 "tls_version": 0, 00:23:06.143 "enable_ktls": false 00:23:06.143 } 00:23:06.143 } 00:23:06.143 ] 00:23:06.143 }, 00:23:06.143 { 00:23:06.143 "subsystem": "vmd", 00:23:06.143 "config": [] 00:23:06.143 }, 00:23:06.143 { 00:23:06.143 "subsystem": "accel", 00:23:06.143 "config": [ 00:23:06.143 { 00:23:06.143 "method": "accel_set_options", 
00:23:06.143 "params": { 00:23:06.143 "small_cache_size": 128, 00:23:06.143 "large_cache_size": 16, 00:23:06.143 "task_count": 2048, 00:23:06.143 "sequence_count": 2048, 00:23:06.143 "buf_count": 2048 00:23:06.143 } 00:23:06.143 } 00:23:06.143 ] 00:23:06.143 }, 00:23:06.143 { 00:23:06.143 "subsystem": "bdev", 00:23:06.143 "config": [ 00:23:06.143 { 00:23:06.143 "method": "bdev_set_options", 00:23:06.143 "params": { 00:23:06.143 "bdev_io_pool_size": 65535, 00:23:06.143 "bdev_io_cache_size": 256, 00:23:06.143 "bdev_auto_examine": true, 00:23:06.143 "iobuf_small_cache_size": 128, 00:23:06.143 "iobuf_large_cache_size": 16 00:23:06.143 } 00:23:06.143 }, 00:23:06.143 { 00:23:06.143 "method": "bdev_raid_set_options", 00:23:06.143 "params": { 00:23:06.143 "process_window_size_kb": 1024, 00:23:06.143 "process_max_bandwidth_mb_sec": 0 00:23:06.143 } 00:23:06.143 }, 00:23:06.143 { 00:23:06.143 "method": "bdev_iscsi_set_options", 00:23:06.143 "params": { 00:23:06.143 "timeout_sec": 30 00:23:06.143 } 00:23:06.143 }, 00:23:06.143 { 00:23:06.143 "method": "bdev_nvme_set_options", 00:23:06.143 "params": { 00:23:06.143 "action_on_timeout": "none", 00:23:06.143 "timeout_us": 0, 00:23:06.143 "timeout_admin_us": 0, 00:23:06.143 "keep_alive_timeout_ms": 10000, 00:23:06.143 "arbitration_burst": 0, 00:23:06.143 "low_priority_weight": 0, 00:23:06.143 "medium_priority_weight": 0, 00:23:06.143 "high_priority_weight": 0, 00:23:06.143 "nvme_adminq_poll_period_us": 10000, 00:23:06.143 "nvme_ioq_poll_period_us": 0, 00:23:06.143 "io_queue_requests": 512, 00:23:06.143 "delay_cmd_submit": true, 00:23:06.143 "transport_retry_count": 4, 00:23:06.143 "bdev_retry_count": 3, 00:23:06.143 "transport_ack_timeout": 0, 00:23:06.143 "ctrlr_loss_timeout_sec": 0, 00:23:06.143 "reconnect_delay_sec": 0, 00:23:06.143 "fast_io_fail_timeout_sec": 0, 00:23:06.143 "disable_auto_failback": false, 00:23:06.143 "generate_uuids": false, 00:23:06.143 "transport_tos": 0, 00:23:06.143 "nvme_error_stat": false, 00:23:06.143 
"rdma_srq_size": 0, 00:23:06.143 "io_path_stat": false, 00:23:06.143 "allow_accel_sequence": false, 00:23:06.143 "rdma_max_cq_size": 0, 00:23:06.143 "rdma_cm_event_timeout_ms": 0, 00:23:06.143 "dhchap_digests": [ 00:23:06.143 "sha256", 00:23:06.143 "sha384", 00:23:06.143 "sha512" 00:23:06.143 ], 00:23:06.143 "dhchap_dhgroups": [ 00:23:06.143 "null", 00:23:06.143 "ffdhe2048", 00:23:06.143 "ffdhe3072", 00:23:06.143 "ffdhe4096", 00:23:06.143 "ffdhe6144", 00:23:06.143 "ffdhe8192" 00:23:06.143 ] 00:23:06.143 } 00:23:06.143 }, 00:23:06.143 { 00:23:06.143 "method": "bdev_nvme_attach_controller", 00:23:06.143 "params": { 00:23:06.143 "name": "TLSTEST", 00:23:06.143 "trtype": "TCP", 00:23:06.143 "adrfam": "IPv4", 00:23:06.143 "traddr": "10.0.0.2", 00:23:06.143 "trsvcid": "4420", 00:23:06.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.143 "prchk_reftag": false, 00:23:06.143 "prchk_guard": false, 00:23:06.143 "ctrlr_loss_timeout_sec": 0, 00:23:06.143 "reconnect_delay_sec": 0, 00:23:06.143 "fast_io_fail_timeout_sec": 0, 00:23:06.143 "psk": "key0", 00:23:06.143 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:06.143 "hdgst": false, 00:23:06.143 "ddgst": false 00:23:06.143 } 00:23:06.143 }, 00:23:06.143 { 00:23:06.143 "method": "bdev_nvme_set_hotplug", 00:23:06.143 "params": { 00:23:06.143 "period_us": 100000, 00:23:06.143 "enable": false 00:23:06.143 } 00:23:06.143 }, 00:23:06.143 { 00:23:06.143 "method": "bdev_wait_for_examine" 00:23:06.143 } 00:23:06.143 ] 00:23:06.143 }, 00:23:06.143 { 00:23:06.144 "subsystem": "nbd", 00:23:06.144 "config": [] 00:23:06.144 } 00:23:06.144 ] 00:23:06.144 }' 00:23:06.144 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1289036 00:23:06.144 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1289036 ']' 00:23:06.144 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1289036 00:23:06.144 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@955 -- # uname 00:23:06.144 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:06.144 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1289036 00:23:06.144 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:06.144 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:06.144 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1289036' 00:23:06.144 killing process with pid 1289036 00:23:06.144 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1289036 00:23:06.144 Received shutdown signal, test time was about 10.000000 seconds 00:23:06.144 00:23:06.144 Latency(us) 00:23:06.144 [2024-12-07T08:57:34.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.144 [2024-12-07T08:57:34.870Z] =================================================================================================================== 00:23:06.144 [2024-12-07T08:57:34.870Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:06.144 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1289036 00:23:06.402 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1288776 00:23:06.402 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1288776 ']' 00:23:06.402 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1288776 00:23:06.402 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:06.402 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:06.402 09:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1288776 00:23:06.402 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:06.402 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:06.402 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1288776' 00:23:06.402 killing process with pid 1288776 00:23:06.402 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1288776 00:23:06.402 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1288776 00:23:06.660 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:06.660 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:06.660 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:06.660 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:06.660 "subsystems": [ 00:23:06.660 { 00:23:06.660 "subsystem": "keyring", 00:23:06.660 "config": [ 00:23:06.660 { 00:23:06.660 "method": "keyring_file_add_key", 00:23:06.660 "params": { 00:23:06.660 "name": "key0", 00:23:06.660 "path": "/tmp/tmp.il7fgHUYpt" 00:23:06.660 } 00:23:06.660 } 00:23:06.660 ] 00:23:06.660 }, 00:23:06.660 { 00:23:06.660 "subsystem": "iobuf", 00:23:06.660 "config": [ 00:23:06.660 { 00:23:06.660 "method": "iobuf_set_options", 00:23:06.660 "params": { 00:23:06.660 "small_pool_count": 8192, 00:23:06.660 "large_pool_count": 1024, 00:23:06.660 "small_bufsize": 8192, 00:23:06.660 "large_bufsize": 135168 00:23:06.660 } 00:23:06.660 } 00:23:06.660 ] 00:23:06.660 }, 00:23:06.660 { 00:23:06.660 "subsystem": "sock", 00:23:06.660 "config": [ 00:23:06.660 { 00:23:06.660 "method": 
"sock_set_default_impl", 00:23:06.660 "params": { 00:23:06.660 "impl_name": "posix" 00:23:06.660 } 00:23:06.660 }, 00:23:06.660 { 00:23:06.660 "method": "sock_impl_set_options", 00:23:06.660 "params": { 00:23:06.660 "impl_name": "ssl", 00:23:06.660 "recv_buf_size": 4096, 00:23:06.660 "send_buf_size": 4096, 00:23:06.660 "enable_recv_pipe": true, 00:23:06.660 "enable_quickack": false, 00:23:06.660 "enable_placement_id": 0, 00:23:06.660 "enable_zerocopy_send_server": true, 00:23:06.660 "enable_zerocopy_send_client": false, 00:23:06.660 "zerocopy_threshold": 0, 00:23:06.660 "tls_version": 0, 00:23:06.660 "enable_ktls": false 00:23:06.660 } 00:23:06.660 }, 00:23:06.660 { 00:23:06.660 "method": "sock_impl_set_options", 00:23:06.660 "params": { 00:23:06.660 "impl_name": "posix", 00:23:06.660 "recv_buf_size": 2097152, 00:23:06.660 "send_buf_size": 2097152, 00:23:06.660 "enable_recv_pipe": true, 00:23:06.660 "enable_quickack": false, 00:23:06.660 "enable_placement_id": 0, 00:23:06.660 "enable_zerocopy_send_server": true, 00:23:06.660 "enable_zerocopy_send_client": false, 00:23:06.660 "zerocopy_threshold": 0, 00:23:06.660 "tls_version": 0, 00:23:06.660 "enable_ktls": false 00:23:06.660 } 00:23:06.660 } 00:23:06.660 ] 00:23:06.660 }, 00:23:06.660 { 00:23:06.660 "subsystem": "vmd", 00:23:06.660 "config": [] 00:23:06.660 }, 00:23:06.660 { 00:23:06.660 "subsystem": "accel", 00:23:06.660 "config": [ 00:23:06.660 { 00:23:06.660 "method": "accel_set_options", 00:23:06.660 "params": { 00:23:06.660 "small_cache_size": 128, 00:23:06.660 "large_cache_size": 16, 00:23:06.660 "task_count": 2048, 00:23:06.660 "sequence_count": 2048, 00:23:06.660 "buf_count": 2048 00:23:06.660 } 00:23:06.660 } 00:23:06.660 ] 00:23:06.660 }, 00:23:06.660 { 00:23:06.660 "subsystem": "bdev", 00:23:06.660 "config": [ 00:23:06.660 { 00:23:06.660 "method": "bdev_set_options", 00:23:06.660 "params": { 00:23:06.660 "bdev_io_pool_size": 65535, 00:23:06.660 "bdev_io_cache_size": 256, 00:23:06.660 
"bdev_auto_examine": true, 00:23:06.660 "iobuf_small_cache_size": 128, 00:23:06.660 "iobuf_large_cache_size": 16 00:23:06.660 } 00:23:06.660 }, 00:23:06.660 { 00:23:06.660 "method": "bdev_raid_set_options", 00:23:06.660 "params": { 00:23:06.660 "process_window_size_kb": 1024, 00:23:06.660 "process_max_bandwidth_mb_sec": 0 00:23:06.660 } 00:23:06.660 }, 00:23:06.660 { 00:23:06.660 "method": "bdev_iscsi_set_options", 00:23:06.660 "params": { 00:23:06.660 "timeout_sec": 30 00:23:06.660 } 00:23:06.660 }, 00:23:06.660 { 00:23:06.660 "method": "bdev_nvme_set_options", 00:23:06.660 "params": { 00:23:06.660 "action_on_timeout": "none", 00:23:06.660 "timeout_us": 0, 00:23:06.660 "timeout_admin_us": 0, 00:23:06.660 "keep_alive_timeout_ms": 10000, 00:23:06.660 "arbitration_burst": 0, 00:23:06.660 "low_priority_weight": 0, 00:23:06.660 "medium_priority_weight": 0, 00:23:06.660 "high_priority_weight": 0, 00:23:06.660 "nvme_adminq_poll_period_us": 10000, 00:23:06.660 "nvme_ioq_poll_period_us": 0, 00:23:06.660 "io_queue_requests": 0, 00:23:06.660 "delay_cmd_submit": true, 00:23:06.660 "transport_retry_count": 4, 00:23:06.660 "bdev_retry_count": 3, 00:23:06.660 "transport_ack_timeout": 0, 00:23:06.660 "ctrlr_loss_timeout_sec": 0, 00:23:06.660 "reconnect_delay_sec": 0, 00:23:06.660 "fast_io_fail_timeout_sec": 0, 00:23:06.660 "disable_auto_failback": false, 00:23:06.660 "generate_uuids": false, 00:23:06.660 "transport_tos": 0, 00:23:06.660 "nvme_error_stat": false, 00:23:06.660 "rdma_srq_size": 0, 00:23:06.660 "io_path_stat": false, 00:23:06.660 "allow_accel_sequence": false, 00:23:06.660 "rdma_max_cq_size": 0, 00:23:06.660 "rdma_cm_event_timeout_ms": 0, 00:23:06.660 "dhchap_digests": [ 00:23:06.660 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.660 "sha256", 00:23:06.660 "sha384", 00:23:06.660 "sha512" 00:23:06.660 ], 00:23:06.660 "dhchap_dhgroups": [ 00:23:06.660 "null", 00:23:06.660 "ffdhe2048", 00:23:06.660 "ffdhe3072", 
00:23:06.661 "ffdhe4096", 00:23:06.661 "ffdhe6144", 00:23:06.661 "ffdhe8192" 00:23:06.661 ] 00:23:06.661 } 00:23:06.661 }, 00:23:06.661 { 00:23:06.661 "method": "bdev_nvme_set_hotplug", 00:23:06.661 "params": { 00:23:06.661 "period_us": 100000, 00:23:06.661 "enable": false 00:23:06.661 } 00:23:06.661 }, 00:23:06.661 { 00:23:06.661 "method": "bdev_malloc_create", 00:23:06.661 "params": { 00:23:06.661 "name": "malloc0", 00:23:06.661 "num_blocks": 8192, 00:23:06.661 "block_size": 4096, 00:23:06.661 "physical_block_size": 4096, 00:23:06.661 "uuid": "cf48bbae-04de-417f-88e5-f7627d8720d1", 00:23:06.661 "optimal_io_boundary": 0, 00:23:06.661 "md_size": 0, 00:23:06.661 "dif_type": 0, 00:23:06.661 "dif_is_head_of_md": false, 00:23:06.661 "dif_pi_format": 0 00:23:06.661 } 00:23:06.661 }, 00:23:06.661 { 00:23:06.661 "method": "bdev_wait_for_examine" 00:23:06.661 } 00:23:06.661 ] 00:23:06.661 }, 00:23:06.661 { 00:23:06.661 "subsystem": "nbd", 00:23:06.661 "config": [] 00:23:06.661 }, 00:23:06.661 { 00:23:06.661 "subsystem": "scheduler", 00:23:06.661 "config": [ 00:23:06.661 { 00:23:06.661 "method": "framework_set_scheduler", 00:23:06.661 "params": { 00:23:06.661 "name": "static" 00:23:06.661 } 00:23:06.661 } 00:23:06.661 ] 00:23:06.661 }, 00:23:06.661 { 00:23:06.661 "subsystem": "nvmf", 00:23:06.661 "config": [ 00:23:06.661 { 00:23:06.661 "method": "nvmf_set_config", 00:23:06.661 "params": { 00:23:06.661 "discovery_filter": "match_any", 00:23:06.661 "admin_cmd_passthru": { 00:23:06.661 "identify_ctrlr": false 00:23:06.661 }, 00:23:06.661 "dhchap_digests": [ 00:23:06.661 "sha256", 00:23:06.661 "sha384", 00:23:06.661 "sha512" 00:23:06.661 ], 00:23:06.661 "dhchap_dhgroups": [ 00:23:06.661 "null", 00:23:06.661 "ffdhe2048", 00:23:06.661 "ffdhe3072", 00:23:06.661 "ffdhe4096", 00:23:06.661 "ffdhe6144", 00:23:06.661 "ffdhe8192" 00:23:06.661 ] 00:23:06.661 } 00:23:06.661 }, 00:23:06.661 { 00:23:06.661 "method": "nvmf_set_max_subsystems", 00:23:06.661 "params": { 00:23:06.661 
"max_subsystems": 1024 00:23:06.661 } 00:23:06.661 }, 00:23:06.661 { 00:23:06.661 "method": "nvmf_set_crdt", 00:23:06.661 "params": { 00:23:06.661 "crdt1": 0, 00:23:06.661 "crdt2": 0, 00:23:06.661 "crdt3": 0 00:23:06.661 } 00:23:06.661 }, 00:23:06.661 { 00:23:06.661 "method": "nvmf_create_transport", 00:23:06.661 "params": { 00:23:06.661 "trtype": "TCP", 00:23:06.661 "max_queue_depth": 128, 00:23:06.661 "max_io_qpairs_per_ctrlr": 127, 00:23:06.661 "in_capsule_data_size": 4096, 00:23:06.661 "max_io_size": 131072, 00:23:06.661 "io_unit_size": 131072, 00:23:06.661 "max_aq_depth": 128, 00:23:06.661 "num_shared_buffers": 511, 00:23:06.661 "buf_cache_size": 4294967295, 00:23:06.661 "dif_insert_or_strip": false, 00:23:06.661 "zcopy": false, 00:23:06.661 "c2h_success": false, 00:23:06.661 "sock_priority": 0, 00:23:06.661 "abort_timeout_sec": 1, 00:23:06.661 "ack_timeout": 0, 00:23:06.661 "data_wr_pool_size": 0 00:23:06.661 } 00:23:06.661 }, 00:23:06.661 { 00:23:06.661 "method": "nvmf_create_subsystem", 00:23:06.661 "params": { 00:23:06.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.661 "allow_any_host": false, 00:23:06.661 "serial_number": "SPDK00000000000001", 00:23:06.661 "model_number": "SPDK bdev Controller", 00:23:06.661 "max_namespaces": 10, 00:23:06.661 "min_cntlid": 1, 00:23:06.661 "max_cntlid": 65519, 00:23:06.661 "ana_reporting": false 00:23:06.661 } 00:23:06.661 }, 00:23:06.661 { 00:23:06.661 "method": "nvmf_subsystem_add_host", 00:23:06.661 "params": { 00:23:06.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.661 "host": "nqn.2016-06.io.spdk:host1", 00:23:06.661 "psk": "key0" 00:23:06.661 } 00:23:06.661 }, 00:23:06.661 { 00:23:06.661 "method": "nvmf_subsystem_add_ns", 00:23:06.661 "params": { 00:23:06.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.661 "namespace": { 00:23:06.661 "nsid": 1, 00:23:06.661 "bdev_name": "malloc0", 00:23:06.661 "nguid": "CF48BBAE04DE417F88E5F7627D8720D1", 00:23:06.661 "uuid": "cf48bbae-04de-417f-88e5-f7627d8720d1", 
00:23:06.661 "no_auto_visible": false 00:23:06.661 } 00:23:06.661 } 00:23:06.661 }, 00:23:06.661 { 00:23:06.661 "method": "nvmf_subsystem_add_listener", 00:23:06.661 "params": { 00:23:06.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.661 "listen_address": { 00:23:06.661 "trtype": "TCP", 00:23:06.661 "adrfam": "IPv4", 00:23:06.661 "traddr": "10.0.0.2", 00:23:06.661 "trsvcid": "4420" 00:23:06.661 }, 00:23:06.661 "secure_channel": true 00:23:06.661 } 00:23:06.661 } 00:23:06.661 ] 00:23:06.661 } 00:23:06.661 ] 00:23:06.661 }' 00:23:06.661 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=1289281 00:23:06.661 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 1289281 00:23:06.661 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:06.661 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1289281 ']' 00:23:06.661 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.661 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:06.661 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.661 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:06.661 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.661 [2024-12-07 09:57:35.229330] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:23:06.661 [2024-12-07 09:57:35.229378] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.661 [2024-12-07 09:57:35.288791] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.661 [2024-12-07 09:57:35.328551] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.661 [2024-12-07 09:57:35.328590] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.661 [2024-12-07 09:57:35.328601] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.661 [2024-12-07 09:57:35.328608] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.661 [2024-12-07 09:57:35.328614] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:06.661 [2024-12-07 09:57:35.328670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.919 [2024-12-07 09:57:35.547545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.919 [2024-12-07 09:57:35.579507] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:06.919 [2024-12-07 09:57:35.579710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.484 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:07.484 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:07.484 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:07.484 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:07.484 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.484 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.484 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1289529 00:23:07.484 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1289529 /var/tmp/bdevperf.sock 00:23:07.484 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:07.484 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1289529 ']' 00:23:07.484 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.484 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:23:07.484 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:07.484 "subsystems": [ 00:23:07.484 { 00:23:07.484 "subsystem": "keyring", 00:23:07.484 "config": [ 00:23:07.484 { 00:23:07.484 "method": "keyring_file_add_key", 00:23:07.484 "params": { 00:23:07.484 "name": "key0", 00:23:07.484 "path": "/tmp/tmp.il7fgHUYpt" 00:23:07.484 } 00:23:07.484 } 00:23:07.484 ] 00:23:07.484 }, 00:23:07.484 { 00:23:07.484 "subsystem": "iobuf", 00:23:07.484 "config": [ 00:23:07.484 { 00:23:07.484 "method": "iobuf_set_options", 00:23:07.484 "params": { 00:23:07.484 "small_pool_count": 8192, 00:23:07.484 "large_pool_count": 1024, 00:23:07.484 "small_bufsize": 8192, 00:23:07.484 "large_bufsize": 135168 00:23:07.484 } 00:23:07.484 } 00:23:07.484 ] 00:23:07.484 }, 00:23:07.484 { 00:23:07.484 "subsystem": "sock", 00:23:07.484 "config": [ 00:23:07.484 { 00:23:07.484 "method": "sock_set_default_impl", 00:23:07.484 "params": { 00:23:07.484 "impl_name": "posix" 00:23:07.484 } 00:23:07.484 }, 00:23:07.484 { 00:23:07.484 "method": "sock_impl_set_options", 00:23:07.484 "params": { 00:23:07.484 "impl_name": "ssl", 00:23:07.484 "recv_buf_size": 4096, 00:23:07.484 "send_buf_size": 4096, 00:23:07.484 "enable_recv_pipe": true, 00:23:07.484 "enable_quickack": false, 00:23:07.484 "enable_placement_id": 0, 00:23:07.484 "enable_zerocopy_send_server": true, 00:23:07.484 "enable_zerocopy_send_client": false, 00:23:07.484 "zerocopy_threshold": 0, 00:23:07.484 "tls_version": 0, 00:23:07.484 "enable_ktls": false 00:23:07.484 } 00:23:07.484 }, 00:23:07.484 { 00:23:07.484 "method": "sock_impl_set_options", 00:23:07.484 "params": { 00:23:07.484 "impl_name": "posix", 00:23:07.484 "recv_buf_size": 2097152, 00:23:07.484 "send_buf_size": 2097152, 00:23:07.484 "enable_recv_pipe": true, 00:23:07.484 "enable_quickack": false, 00:23:07.484 "enable_placement_id": 0, 00:23:07.484 "enable_zerocopy_send_server": true, 00:23:07.484 "enable_zerocopy_send_client": false, 
00:23:07.484 "zerocopy_threshold": 0, 00:23:07.484 "tls_version": 0, 00:23:07.484 "enable_ktls": false 00:23:07.484 } 00:23:07.484 } 00:23:07.484 ] 00:23:07.484 }, 00:23:07.484 { 00:23:07.484 "subsystem": "vmd", 00:23:07.484 "config": [] 00:23:07.484 }, 00:23:07.484 { 00:23:07.484 "subsystem": "accel", 00:23:07.484 "config": [ 00:23:07.484 { 00:23:07.484 "method": "accel_set_options", 00:23:07.484 "params": { 00:23:07.484 "small_cache_size": 128, 00:23:07.484 "large_cache_size": 16, 00:23:07.484 "task_count": 2048, 00:23:07.484 "sequence_count": 2048, 00:23:07.484 "buf_count": 2048 00:23:07.484 } 00:23:07.484 } 00:23:07.484 ] 00:23:07.484 }, 00:23:07.484 { 00:23:07.484 "subsystem": "bdev", 00:23:07.484 "config": [ 00:23:07.484 { 00:23:07.484 "method": "bdev_set_options", 00:23:07.484 "params": { 00:23:07.484 "bdev_io_pool_size": 65535, 00:23:07.484 "bdev_io_cache_size": 256, 00:23:07.484 "bdev_auto_examine": true, 00:23:07.484 "iobuf_small_cache_size": 128, 00:23:07.484 "iobuf_large_cache_size": 16 00:23:07.484 } 00:23:07.484 }, 00:23:07.484 { 00:23:07.484 "method": "bdev_raid_set_options", 00:23:07.484 "params": { 00:23:07.484 "process_window_size_kb": 1024, 00:23:07.484 "process_max_bandwidth_mb_sec": 0 00:23:07.484 } 00:23:07.484 }, 00:23:07.484 { 00:23:07.484 "method": "bdev_iscsi_set_options", 00:23:07.484 "params": { 00:23:07.484 "timeout_sec": 30 00:23:07.484 } 00:23:07.484 }, 00:23:07.484 { 00:23:07.484 "method": "bdev_nvme_set_options", 00:23:07.484 "params": { 00:23:07.484 "action_on_timeout": "none", 00:23:07.484 "timeout_us": 0, 00:23:07.484 "timeout_admin_us": 0, 00:23:07.484 "keep_alive_timeout_ms": 10000, 00:23:07.484 "arbitration_burst": 0, 00:23:07.484 "low_priority_weight": 0, 00:23:07.484 "medium_priority_weight": 0, 00:23:07.484 "high_priority_weight": 0, 00:23:07.484 "nvme_adminq_poll_period_us": 10000, 00:23:07.484 "nvme_ioq_poll_period_us": 0, 00:23:07.484 "io_queue_requests": 512, 00:23:07.484 "delay_cmd_submit": true, 00:23:07.484 
"transport_retry_count": 4, 00:23:07.484 "bdev_retry_count": 3, 00:23:07.484 "transport_ack_timeout": 0, 00:23:07.484 "ctrlr_loss_timeout_sec": 0, 00:23:07.484 "reconnect_delay_sec": 0, 00:23:07.484 "fast_io_fail_timeout_sec": 0, 00:23:07.484 "disable_auto_failback": false, 00:23:07.484 "generate_uuids": false, 00:23:07.484 "transport_tos": 0, 00:23:07.484 "nvme_error_stat": false, 00:23:07.484 "rdma_srq_size": 0, 00:23:07.484 "io_path_stat": false, 00:23:07.484 "allow_accel_sequence": false, 00:23:07.484 "rdma_max_cq_size": 0, 00:23:07.484 "rdma_cm_event_timeout_ms": 0, 00:23:07.484 "dhchap_digests": [ 00:23:07.484 "sha256", 00:23:07.484 "sha384", 00:23:07.484 "sha512" 00:23:07.484 ], 00:23:07.484 "dhchap_dhgroups": [ 00:23:07.484 "null", 00:23:07.484 "ffdhe2048", 00:23:07.484 "ffdhe3072", 00:23:07.484 "ffdhe4096", 00:23:07.484 "ffdhe6144", 00:23:07.484 "ffdhe8192" 00:23:07.484 ] 00:23:07.484 } 00:23:07.484 }, 00:23:07.485 { 00:23:07.485 "method": "bdev_nvme_attach_controller", 00:23:07.485 "params": { 00:23:07.485 "name": "TLSTEST", 00:23:07.485 "trtype": "TCP", 00:23:07.485 "adrfam": "IPv4", 00:23:07.485 "traddr": "10.0.0.2", 00:23:07.485 "trsvcid": "4420", 00:23:07.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.485 "prchk_reftag": false, 00:23:07.485 "prchk_guard": false, 00:23:07.485 "ctrlr_loss_timeout_sec": 0, 00:23:07.485 "reconnect_delay_sec": 0, 00:23:07.485 "fast_io_fail_timeout_sec": 0, 00:23:07.485 "psk": "key0", 00:23:07.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:07.485 "hdgst": false, 00:23:07.485 "ddgst": false 00:23:07.485 } 00:23:07.485 }, 00:23:07.485 { 00:23:07.485 "method": "bdev_nvme_set_hotplug", 00:23:07.485 "params": { 00:23:07.485 "period_us": 100000, 00:23:07.485 "enable": false 00:23:07.485 } 00:23:07.485 }, 00:23:07.485 { 00:23:07.485 "method": "bdev_wait_for_examine" 00:23:07.485 } 00:23:07.485 ] 00:23:07.485 }, 00:23:07.485 { 00:23:07.485 "subsystem": "nbd", 00:23:07.485 "config": [] 00:23:07.485 } 00:23:07.485 ] 
00:23:07.485 }' 00:23:07.485 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.485 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:07.485 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.485 [2024-12-07 09:57:36.156331] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:07.485 [2024-12-07 09:57:36.156380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1289529 ] 00:23:07.485 [2024-12-07 09:57:36.206761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.742 [2024-12-07 09:57:36.247652] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.743 [2024-12-07 09:57:36.395033] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.308 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:08.308 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:08.308 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:08.566 Running I/O for 10 seconds... 
00:23:10.435 5341.00 IOPS, 20.86 MiB/s [2024-12-07T08:57:40.094Z] 5474.50 IOPS, 21.38 MiB/s [2024-12-07T08:57:41.469Z] 5517.33 IOPS, 21.55 MiB/s [2024-12-07T08:57:42.405Z] 5473.00 IOPS, 21.38 MiB/s [2024-12-07T08:57:43.339Z] 5444.40 IOPS, 21.27 MiB/s [2024-12-07T08:57:44.274Z] 5448.33 IOPS, 21.28 MiB/s [2024-12-07T08:57:45.210Z] 5452.57 IOPS, 21.30 MiB/s [2024-12-07T08:57:46.143Z] 5469.75 IOPS, 21.37 MiB/s [2024-12-07T08:57:47.519Z] 5495.78 IOPS, 21.47 MiB/s [2024-12-07T08:57:47.520Z] 5487.10 IOPS, 21.43 MiB/s 00:23:18.794 Latency(us) 00:23:18.794 [2024-12-07T08:57:47.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.794 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:18.794 Verification LBA range: start 0x0 length 0x2000 00:23:18.794 TLSTESTn1 : 10.02 5489.37 21.44 0.00 0.00 23279.42 4701.50 39891.48 00:23:18.794 [2024-12-07T08:57:47.520Z] =================================================================================================================== 00:23:18.794 [2024-12-07T08:57:47.520Z] Total : 5489.37 21.44 0.00 0.00 23279.42 4701.50 39891.48 00:23:18.794 { 00:23:18.794 "results": [ 00:23:18.794 { 00:23:18.794 "job": "TLSTESTn1", 00:23:18.794 "core_mask": "0x4", 00:23:18.794 "workload": "verify", 00:23:18.794 "status": "finished", 00:23:18.794 "verify_range": { 00:23:18.794 "start": 0, 00:23:18.794 "length": 8192 00:23:18.794 }, 00:23:18.794 "queue_depth": 128, 00:23:18.794 "io_size": 4096, 00:23:18.794 "runtime": 10.01919, 00:23:18.794 "iops": 5489.3659068248035, 00:23:18.794 "mibps": 21.44283557353439, 00:23:18.794 "io_failed": 0, 00:23:18.794 "io_timeout": 0, 00:23:18.794 "avg_latency_us": 23279.41893672375, 00:23:18.794 "min_latency_us": 4701.495652173913, 00:23:18.794 "max_latency_us": 39891.47826086957 00:23:18.794 } 00:23:18.794 ], 00:23:18.794 "core_count": 1 00:23:18.794 } 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1289529 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1289529 ']' 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1289529 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1289529 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1289529' 00:23:18.794 killing process with pid 1289529 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1289529 00:23:18.794 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.794 00:23:18.794 Latency(us) 00:23:18.794 [2024-12-07T08:57:47.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.794 [2024-12-07T08:57:47.520Z] =================================================================================================================== 00:23:18.794 [2024-12-07T08:57:47.520Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1289529 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1289281 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@950 -- # '[' -z 1289281 ']' 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1289281 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1289281 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1289281' 00:23:18.794 killing process with pid 1289281 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1289281 00:23:18.794 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1289281 00:23:19.052 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:19.052 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:19.052 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:19.052 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.052 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=1291372 00:23:19.052 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:19.052 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 1291372 00:23:19.052 
09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1291372 ']' 00:23:19.052 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.052 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:19.052 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.052 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:19.052 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.052 [2024-12-07 09:57:47.668708] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:19.052 [2024-12-07 09:57:47.668758] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.052 [2024-12-07 09:57:47.727653] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.052 [2024-12-07 09:57:47.767650] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.052 [2024-12-07 09:57:47.767688] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.052 [2024-12-07 09:57:47.767695] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.052 [2024-12-07 09:57:47.767701] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:19.052 [2024-12-07 09:57:47.767706] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:19.052 [2024-12-07 09:57:47.767723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.310 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:19.310 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:19.310 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:19.310 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:19.310 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.310 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.310 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.il7fgHUYpt 00:23:19.310 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.il7fgHUYpt 00:23:19.310 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:19.568 [2024-12-07 09:57:48.065486] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.568 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:19.568 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:19.826 [2024-12-07 09:57:48.442465] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:23:19.826 [2024-12-07 09:57:48.442702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.826 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:20.084 malloc0 00:23:20.084 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:20.342 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.il7fgHUYpt 00:23:20.342 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:20.606 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:20.606 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1291629 00:23:20.606 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.606 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1291629 /var/tmp/bdevperf.sock 00:23:20.606 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1291629 ']' 00:23:20.606 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.606 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:20.607 
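The key file loaded above via keyring_file_add_key (/tmp/tmp.il7fgHUYpt) holds the TLS pre-shared key in the NVMe/TCP PSK interchange format ("NVMeTLSkey-1:<hash>:<base64 blob>:"). A minimal sketch of that encoding follows; the exact blob layout used here (configured PSK followed by its CRC-32, little-endian) is an assumption for illustration, not a restatement of SPDK's implementation:

```python
import base64
import zlib


def format_psk_interchange(psk: bytes, hash_id: str = "01") -> str:
    """Encode a configured PSK in the interchange form "NVMeTLSkey-1:<hash>:<b64>:".

    Assumed layout: base64 of the PSK bytes followed by their CRC-32
    (little-endian). hash_id "01" denotes SHA-256, "02" SHA-384.
    """
    crc = zlib.crc32(psk).to_bytes(4, "little")
    blob = base64.b64encode(psk + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id}:{blob}:"


def parse_psk_interchange(key: str) -> bytes:
    """Decode the interchange string back to the raw PSK, checking the CRC."""
    _prefix, _hash_id, blob, _empty = key.split(":")
    raw = base64.b64decode(blob)
    psk, crc = raw[:-4], raw[-4:]
    if zlib.crc32(psk).to_bytes(4, "little") != crc:
        raise ValueError("PSK CRC mismatch")
    return psk


# Example with a fixed (all-zero) 32-byte PSK so the output is deterministic.
key = format_psk_interchange(b"\x00" * 32)
print(key)
```

The round trip (encode, then decode with CRC verification) is the useful property here: a corrupted key file fails parsing rather than silently configuring a wrong PSK.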
09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.607 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:20.607 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.607 [2024-12-07 09:57:49.268474] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:20.607 [2024-12-07 09:57:49.268522] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291629 ] 00:23:20.607 [2024-12-07 09:57:49.323484] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.866 [2024-12-07 09:57:49.365070] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.866 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:20.866 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:20.866 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.il7fgHUYpt 00:23:21.124 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:21.124 [2024-12-07 09:57:49.799061] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:23:21.381 nvme0n1 00:23:21.381 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:21.381 Running I/O for 1 seconds... 00:23:22.326 4682.00 IOPS, 18.29 MiB/s 00:23:22.326 Latency(us) 00:23:22.326 [2024-12-07T08:57:51.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.326 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:22.326 Verification LBA range: start 0x0 length 0x2000 00:23:22.326 nvme0n1 : 1.02 4723.50 18.45 0.00 0.00 26864.05 5356.86 51289.04 00:23:22.326 [2024-12-07T08:57:51.052Z] =================================================================================================================== 00:23:22.326 [2024-12-07T08:57:51.052Z] Total : 4723.50 18.45 0.00 0.00 26864.05 5356.86 51289.04 00:23:22.326 { 00:23:22.326 "results": [ 00:23:22.326 { 00:23:22.326 "job": "nvme0n1", 00:23:22.326 "core_mask": "0x2", 00:23:22.326 "workload": "verify", 00:23:22.326 "status": "finished", 00:23:22.326 "verify_range": { 00:23:22.326 "start": 0, 00:23:22.326 "length": 8192 00:23:22.326 }, 00:23:22.326 "queue_depth": 128, 00:23:22.326 "io_size": 4096, 00:23:22.326 "runtime": 1.018312, 00:23:22.326 "iops": 4723.503209232534, 00:23:22.326 "mibps": 18.451184411064585, 00:23:22.326 "io_failed": 0, 00:23:22.326 "io_timeout": 0, 00:23:22.326 "avg_latency_us": 26864.04936961041, 00:23:22.326 "min_latency_us": 5356.855652173913, 00:23:22.326 "max_latency_us": 51289.04347826087 00:23:22.326 } 00:23:22.326 ], 00:23:22.326 "core_count": 1 00:23:22.326 } 00:23:22.326 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1291629 00:23:22.326 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1291629 ']' 00:23:22.326 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 1291629 00:23:22.326 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:22.326 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.326 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1291629 00:23:22.615 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:22.615 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:22.615 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1291629' 00:23:22.615 killing process with pid 1291629 00:23:22.615 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1291629 00:23:22.615 Received shutdown signal, test time was about 1.000000 seconds 00:23:22.615 00:23:22.615 Latency(us) 00:23:22.615 [2024-12-07T08:57:51.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.615 [2024-12-07T08:57:51.341Z] =================================================================================================================== 00:23:22.615 [2024-12-07T08:57:51.341Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:22.615 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1291629 00:23:22.615 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1291372 00:23:22.615 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1291372 ']' 00:23:22.615 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1291372 00:23:22.615 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:22.615 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.615 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1291372 00:23:22.615 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:22.615 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:22.615 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1291372' 00:23:22.615 killing process with pid 1291372 00:23:22.615 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1291372 00:23:22.615 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1291372 00:23:22.897 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:22.897 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:22.897 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:22.897 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.897 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=1292016 00:23:22.897 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:22.897 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 1292016 00:23:22.897 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1292016 ']' 00:23:22.897 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.897 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:23:22.897 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.897 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:22.897 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.897 [2024-12-07 09:57:51.540715] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:22.897 [2024-12-07 09:57:51.540762] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.897 [2024-12-07 09:57:51.598428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.170 [2024-12-07 09:57:51.641085] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.170 [2024-12-07 09:57:51.641124] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.170 [2024-12-07 09:57:51.641131] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.170 [2024-12-07 09:57:51.641137] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.170 [2024-12-07 09:57:51.641142] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:23.170 [2024-12-07 09:57:51.641180] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.170 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:23.170 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:23.170 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:23.170 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:23.170 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.171 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.171 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:23.171 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.171 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.171 [2024-12-07 09:57:51.766408] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.171 malloc0 00:23:23.171 [2024-12-07 09:57:51.804164] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:23.171 [2024-12-07 09:57:51.804380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.171 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.171 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1292124 00:23:23.171 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:23.171 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 1292124 /var/tmp/bdevperf.sock 00:23:23.171 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1292124 ']' 00:23:23.171 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.171 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:23.171 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:23.171 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:23.171 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.171 [2024-12-07 09:57:51.880729] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:23:23.171 [2024-12-07 09:57:51.880771] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292124 ] 00:23:23.441 [2024-12-07 09:57:51.935062] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.441 [2024-12-07 09:57:51.976273] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.441 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:23.441 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:23.441 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.il7fgHUYpt 00:23:23.716 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:23.717 [2024-12-07 09:57:52.426204] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:23.989 nvme0n1 00:23:23.989 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:23.989 Running I/O for 1 seconds... 
00:23:24.924 5359.00 IOPS, 20.93 MiB/s 00:23:24.924 Latency(us) 00:23:24.924 [2024-12-07T08:57:53.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.924 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:24.924 Verification LBA range: start 0x0 length 0x2000 00:23:24.924 nvme0n1 : 1.01 5408.12 21.13 0.00 0.00 23506.21 5755.77 28607.89 00:23:24.924 [2024-12-07T08:57:53.650Z] =================================================================================================================== 00:23:24.924 [2024-12-07T08:57:53.650Z] Total : 5408.12 21.13 0.00 0.00 23506.21 5755.77 28607.89 00:23:24.924 { 00:23:24.924 "results": [ 00:23:24.924 { 00:23:24.924 "job": "nvme0n1", 00:23:24.924 "core_mask": "0x2", 00:23:24.924 "workload": "verify", 00:23:24.924 "status": "finished", 00:23:24.924 "verify_range": { 00:23:24.924 "start": 0, 00:23:24.924 "length": 8192 00:23:24.924 }, 00:23:24.924 "queue_depth": 128, 00:23:24.924 "io_size": 4096, 00:23:24.924 "runtime": 1.014771, 00:23:24.924 "iops": 5408.116708104587, 00:23:24.924 "mibps": 21.125455891033543, 00:23:24.924 "io_failed": 0, 00:23:24.924 "io_timeout": 0, 00:23:24.924 "avg_latency_us": 23506.211403219673, 00:23:24.924 "min_latency_us": 5755.770434782608, 00:23:24.924 "max_latency_us": 28607.888695652175 00:23:24.924 } 00:23:24.924 ], 00:23:24.924 "core_count": 1 00:23:24.924 } 00:23:25.183 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:25.183 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.183 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.183 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.183 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:25.183 "subsystems": [ 00:23:25.183 { 00:23:25.183 "subsystem": 
"keyring", 00:23:25.183 "config": [ 00:23:25.183 { 00:23:25.183 "method": "keyring_file_add_key", 00:23:25.183 "params": { 00:23:25.183 "name": "key0", 00:23:25.183 "path": "/tmp/tmp.il7fgHUYpt" 00:23:25.183 } 00:23:25.183 } 00:23:25.183 ] 00:23:25.183 }, 00:23:25.183 { 00:23:25.183 "subsystem": "iobuf", 00:23:25.183 "config": [ 00:23:25.183 { 00:23:25.183 "method": "iobuf_set_options", 00:23:25.183 "params": { 00:23:25.183 "small_pool_count": 8192, 00:23:25.183 "large_pool_count": 1024, 00:23:25.183 "small_bufsize": 8192, 00:23:25.183 "large_bufsize": 135168 00:23:25.183 } 00:23:25.183 } 00:23:25.183 ] 00:23:25.183 }, 00:23:25.183 { 00:23:25.183 "subsystem": "sock", 00:23:25.183 "config": [ 00:23:25.183 { 00:23:25.183 "method": "sock_set_default_impl", 00:23:25.183 "params": { 00:23:25.183 "impl_name": "posix" 00:23:25.183 } 00:23:25.183 }, 00:23:25.183 { 00:23:25.183 "method": "sock_impl_set_options", 00:23:25.183 "params": { 00:23:25.183 "impl_name": "ssl", 00:23:25.183 "recv_buf_size": 4096, 00:23:25.183 "send_buf_size": 4096, 00:23:25.183 "enable_recv_pipe": true, 00:23:25.183 "enable_quickack": false, 00:23:25.183 "enable_placement_id": 0, 00:23:25.183 "enable_zerocopy_send_server": true, 00:23:25.183 "enable_zerocopy_send_client": false, 00:23:25.183 "zerocopy_threshold": 0, 00:23:25.183 "tls_version": 0, 00:23:25.183 "enable_ktls": false 00:23:25.183 } 00:23:25.183 }, 00:23:25.183 { 00:23:25.183 "method": "sock_impl_set_options", 00:23:25.183 "params": { 00:23:25.183 "impl_name": "posix", 00:23:25.183 "recv_buf_size": 2097152, 00:23:25.183 "send_buf_size": 2097152, 00:23:25.183 "enable_recv_pipe": true, 00:23:25.183 "enable_quickack": false, 00:23:25.183 "enable_placement_id": 0, 00:23:25.183 "enable_zerocopy_send_server": true, 00:23:25.183 "enable_zerocopy_send_client": false, 00:23:25.183 "zerocopy_threshold": 0, 00:23:25.183 "tls_version": 0, 00:23:25.183 "enable_ktls": false 00:23:25.183 } 00:23:25.183 } 00:23:25.183 ] 00:23:25.183 }, 00:23:25.183 { 
00:23:25.183 "subsystem": "vmd", 00:23:25.183 "config": [] 00:23:25.183 }, 00:23:25.183 { 00:23:25.183 "subsystem": "accel", 00:23:25.183 "config": [ 00:23:25.183 { 00:23:25.183 "method": "accel_set_options", 00:23:25.183 "params": { 00:23:25.183 "small_cache_size": 128, 00:23:25.183 "large_cache_size": 16, 00:23:25.183 "task_count": 2048, 00:23:25.183 "sequence_count": 2048, 00:23:25.183 "buf_count": 2048 00:23:25.183 } 00:23:25.183 } 00:23:25.183 ] 00:23:25.183 }, 00:23:25.183 { 00:23:25.183 "subsystem": "bdev", 00:23:25.183 "config": [ 00:23:25.183 { 00:23:25.183 "method": "bdev_set_options", 00:23:25.183 "params": { 00:23:25.183 "bdev_io_pool_size": 65535, 00:23:25.183 "bdev_io_cache_size": 256, 00:23:25.183 "bdev_auto_examine": true, 00:23:25.183 "iobuf_small_cache_size": 128, 00:23:25.183 "iobuf_large_cache_size": 16 00:23:25.183 } 00:23:25.183 }, 00:23:25.183 { 00:23:25.183 "method": "bdev_raid_set_options", 00:23:25.183 "params": { 00:23:25.183 "process_window_size_kb": 1024, 00:23:25.183 "process_max_bandwidth_mb_sec": 0 00:23:25.183 } 00:23:25.183 }, 00:23:25.183 { 00:23:25.184 "method": "bdev_iscsi_set_options", 00:23:25.184 "params": { 00:23:25.184 "timeout_sec": 30 00:23:25.184 } 00:23:25.184 }, 00:23:25.184 { 00:23:25.184 "method": "bdev_nvme_set_options", 00:23:25.184 "params": { 00:23:25.184 "action_on_timeout": "none", 00:23:25.184 "timeout_us": 0, 00:23:25.184 "timeout_admin_us": 0, 00:23:25.184 "keep_alive_timeout_ms": 10000, 00:23:25.184 "arbitration_burst": 0, 00:23:25.184 "low_priority_weight": 0, 00:23:25.184 "medium_priority_weight": 0, 00:23:25.184 "high_priority_weight": 0, 00:23:25.184 "nvme_adminq_poll_period_us": 10000, 00:23:25.184 "nvme_ioq_poll_period_us": 0, 00:23:25.184 "io_queue_requests": 0, 00:23:25.184 "delay_cmd_submit": true, 00:23:25.184 "transport_retry_count": 4, 00:23:25.184 "bdev_retry_count": 3, 00:23:25.184 "transport_ack_timeout": 0, 00:23:25.184 "ctrlr_loss_timeout_sec": 0, 00:23:25.184 "reconnect_delay_sec": 0, 
00:23:25.184 "fast_io_fail_timeout_sec": 0, 00:23:25.184 "disable_auto_failback": false, 00:23:25.184 "generate_uuids": false, 00:23:25.184 "transport_tos": 0, 00:23:25.184 "nvme_error_stat": false, 00:23:25.184 "rdma_srq_size": 0, 00:23:25.184 "io_path_stat": false, 00:23:25.184 "allow_accel_sequence": false, 00:23:25.184 "rdma_max_cq_size": 0, 00:23:25.184 "rdma_cm_event_timeout_ms": 0, 00:23:25.184 "dhchap_digests": [ 00:23:25.184 "sha256", 00:23:25.184 "sha384", 00:23:25.184 "sha512" 00:23:25.184 ], 00:23:25.184 "dhchap_dhgroups": [ 00:23:25.184 "null", 00:23:25.184 "ffdhe2048", 00:23:25.184 "ffdhe3072", 00:23:25.184 "ffdhe4096", 00:23:25.184 "ffdhe6144", 00:23:25.184 "ffdhe8192" 00:23:25.184 ] 00:23:25.184 } 00:23:25.184 }, 00:23:25.184 { 00:23:25.184 "method": "bdev_nvme_set_hotplug", 00:23:25.184 "params": { 00:23:25.184 "period_us": 100000, 00:23:25.184 "enable": false 00:23:25.184 } 00:23:25.184 }, 00:23:25.184 { 00:23:25.184 "method": "bdev_malloc_create", 00:23:25.184 "params": { 00:23:25.184 "name": "malloc0", 00:23:25.184 "num_blocks": 8192, 00:23:25.184 "block_size": 4096, 00:23:25.184 "physical_block_size": 4096, 00:23:25.184 "uuid": "ef515781-b53f-4bee-aa9f-8022c1eb1515", 00:23:25.184 "optimal_io_boundary": 0, 00:23:25.184 "md_size": 0, 00:23:25.184 "dif_type": 0, 00:23:25.184 "dif_is_head_of_md": false, 00:23:25.184 "dif_pi_format": 0 00:23:25.184 } 00:23:25.184 }, 00:23:25.184 { 00:23:25.184 "method": "bdev_wait_for_examine" 00:23:25.184 } 00:23:25.184 ] 00:23:25.184 }, 00:23:25.184 { 00:23:25.184 "subsystem": "nbd", 00:23:25.184 "config": [] 00:23:25.184 }, 00:23:25.184 { 00:23:25.184 "subsystem": "scheduler", 00:23:25.184 "config": [ 00:23:25.184 { 00:23:25.184 "method": "framework_set_scheduler", 00:23:25.184 "params": { 00:23:25.184 "name": "static" 00:23:25.184 } 00:23:25.184 } 00:23:25.184 ] 00:23:25.184 }, 00:23:25.184 { 00:23:25.184 "subsystem": "nvmf", 00:23:25.184 "config": [ 00:23:25.184 { 00:23:25.184 "method": "nvmf_set_config", 
00:23:25.184 "params": { 00:23:25.184 "discovery_filter": "match_any", 00:23:25.184 "admin_cmd_passthru": { 00:23:25.184 "identify_ctrlr": false 00:23:25.184 }, 00:23:25.184 "dhchap_digests": [ 00:23:25.184 "sha256", 00:23:25.184 "sha384", 00:23:25.184 "sha512" 00:23:25.184 ], 00:23:25.184 "dhchap_dhgroups": [ 00:23:25.184 "null", 00:23:25.184 "ffdhe2048", 00:23:25.184 "ffdhe3072", 00:23:25.184 "ffdhe4096", 00:23:25.184 "ffdhe6144", 00:23:25.184 "ffdhe8192" 00:23:25.184 ] 00:23:25.184 } 00:23:25.184 }, 00:23:25.184 { 00:23:25.184 "method": "nvmf_set_max_subsystems", 00:23:25.184 "params": { 00:23:25.184 "max_subsystems": 1024 00:23:25.184 } 00:23:25.184 }, 00:23:25.184 { 00:23:25.184 "method": "nvmf_set_crdt", 00:23:25.184 "params": { 00:23:25.184 "crdt1": 0, 00:23:25.184 "crdt2": 0, 00:23:25.184 "crdt3": 0 00:23:25.184 } 00:23:25.184 }, 00:23:25.184 { 00:23:25.184 "method": "nvmf_create_transport", 00:23:25.184 "params": { 00:23:25.184 "trtype": "TCP", 00:23:25.184 "max_queue_depth": 128, 00:23:25.184 "max_io_qpairs_per_ctrlr": 127, 00:23:25.184 "in_capsule_data_size": 4096, 00:23:25.184 "max_io_size": 131072, 00:23:25.184 "io_unit_size": 131072, 00:23:25.184 "max_aq_depth": 128, 00:23:25.184 "num_shared_buffers": 511, 00:23:25.184 "buf_cache_size": 4294967295, 00:23:25.184 "dif_insert_or_strip": false, 00:23:25.184 "zcopy": false, 00:23:25.184 "c2h_success": false, 00:23:25.184 "sock_priority": 0, 00:23:25.184 "abort_timeout_sec": 1, 00:23:25.184 "ack_timeout": 0, 00:23:25.184 "data_wr_pool_size": 0 00:23:25.184 } 00:23:25.184 }, 00:23:25.184 { 00:23:25.184 "method": "nvmf_create_subsystem", 00:23:25.184 "params": { 00:23:25.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.184 "allow_any_host": false, 00:23:25.184 "serial_number": "00000000000000000000", 00:23:25.184 "model_number": "SPDK bdev Controller", 00:23:25.184 "max_namespaces": 32, 00:23:25.184 "min_cntlid": 1, 00:23:25.184 "max_cntlid": 65519, 00:23:25.184 "ana_reporting": false 00:23:25.184 } 
00:23:25.184 }, 00:23:25.184 { 00:23:25.184 "method": "nvmf_subsystem_add_host", 00:23:25.184 "params": { 00:23:25.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.184 "host": "nqn.2016-06.io.spdk:host1", 00:23:25.184 "psk": "key0" 00:23:25.184 } 00:23:25.184 }, 00:23:25.184 { 00:23:25.184 "method": "nvmf_subsystem_add_ns", 00:23:25.184 "params": { 00:23:25.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.184 "namespace": { 00:23:25.184 "nsid": 1, 00:23:25.184 "bdev_name": "malloc0", 00:23:25.184 "nguid": "EF515781B53F4BEEAA9F8022C1EB1515", 00:23:25.184 "uuid": "ef515781-b53f-4bee-aa9f-8022c1eb1515", 00:23:25.184 "no_auto_visible": false 00:23:25.184 } 00:23:25.184 } 00:23:25.184 }, 00:23:25.184 { 00:23:25.184 "method": "nvmf_subsystem_add_listener", 00:23:25.184 "params": { 00:23:25.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.184 "listen_address": { 00:23:25.184 "trtype": "TCP", 00:23:25.184 "adrfam": "IPv4", 00:23:25.184 "traddr": "10.0.0.2", 00:23:25.184 "trsvcid": "4420" 00:23:25.184 }, 00:23:25.184 "secure_channel": false, 00:23:25.184 "sock_impl": "ssl" 00:23:25.184 } 00:23:25.184 } 00:23:25.184 ] 00:23:25.184 } 00:23:25.184 ] 00:23:25.184 }' 00:23:25.184 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:25.444 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:25.444 "subsystems": [ 00:23:25.444 { 00:23:25.444 "subsystem": "keyring", 00:23:25.444 "config": [ 00:23:25.444 { 00:23:25.444 "method": "keyring_file_add_key", 00:23:25.444 "params": { 00:23:25.444 "name": "key0", 00:23:25.444 "path": "/tmp/tmp.il7fgHUYpt" 00:23:25.444 } 00:23:25.444 } 00:23:25.444 ] 00:23:25.444 }, 00:23:25.444 { 00:23:25.444 "subsystem": "iobuf", 00:23:25.444 "config": [ 00:23:25.444 { 00:23:25.444 "method": "iobuf_set_options", 00:23:25.444 "params": { 00:23:25.444 "small_pool_count": 8192, 00:23:25.444 
"large_pool_count": 1024, 00:23:25.444 "small_bufsize": 8192, 00:23:25.444 "large_bufsize": 135168 00:23:25.444 } 00:23:25.444 } 00:23:25.444 ] 00:23:25.444 }, 00:23:25.444 { 00:23:25.444 "subsystem": "sock", 00:23:25.444 "config": [ 00:23:25.444 { 00:23:25.444 "method": "sock_set_default_impl", 00:23:25.444 "params": { 00:23:25.444 "impl_name": "posix" 00:23:25.444 } 00:23:25.444 }, 00:23:25.444 { 00:23:25.444 "method": "sock_impl_set_options", 00:23:25.444 "params": { 00:23:25.444 "impl_name": "ssl", 00:23:25.444 "recv_buf_size": 4096, 00:23:25.444 "send_buf_size": 4096, 00:23:25.444 "enable_recv_pipe": true, 00:23:25.444 "enable_quickack": false, 00:23:25.444 "enable_placement_id": 0, 00:23:25.444 "enable_zerocopy_send_server": true, 00:23:25.444 "enable_zerocopy_send_client": false, 00:23:25.444 "zerocopy_threshold": 0, 00:23:25.444 "tls_version": 0, 00:23:25.444 "enable_ktls": false 00:23:25.444 } 00:23:25.444 }, 00:23:25.444 { 00:23:25.444 "method": "sock_impl_set_options", 00:23:25.444 "params": { 00:23:25.444 "impl_name": "posix", 00:23:25.444 "recv_buf_size": 2097152, 00:23:25.444 "send_buf_size": 2097152, 00:23:25.444 "enable_recv_pipe": true, 00:23:25.444 "enable_quickack": false, 00:23:25.444 "enable_placement_id": 0, 00:23:25.444 "enable_zerocopy_send_server": true, 00:23:25.444 "enable_zerocopy_send_client": false, 00:23:25.444 "zerocopy_threshold": 0, 00:23:25.444 "tls_version": 0, 00:23:25.444 "enable_ktls": false 00:23:25.444 } 00:23:25.444 } 00:23:25.444 ] 00:23:25.444 }, 00:23:25.444 { 00:23:25.444 "subsystem": "vmd", 00:23:25.444 "config": [] 00:23:25.444 }, 00:23:25.444 { 00:23:25.444 "subsystem": "accel", 00:23:25.444 "config": [ 00:23:25.444 { 00:23:25.444 "method": "accel_set_options", 00:23:25.444 "params": { 00:23:25.444 "small_cache_size": 128, 00:23:25.444 "large_cache_size": 16, 00:23:25.444 "task_count": 2048, 00:23:25.444 "sequence_count": 2048, 00:23:25.444 "buf_count": 2048 00:23:25.444 } 00:23:25.444 } 00:23:25.444 ] 00:23:25.444 
}, 00:23:25.444 { 00:23:25.444 "subsystem": "bdev", 00:23:25.444 "config": [ 00:23:25.444 { 00:23:25.444 "method": "bdev_set_options", 00:23:25.444 "params": { 00:23:25.444 "bdev_io_pool_size": 65535, 00:23:25.444 "bdev_io_cache_size": 256, 00:23:25.444 "bdev_auto_examine": true, 00:23:25.444 "iobuf_small_cache_size": 128, 00:23:25.444 "iobuf_large_cache_size": 16 00:23:25.444 } 00:23:25.444 }, 00:23:25.444 { 00:23:25.444 "method": "bdev_raid_set_options", 00:23:25.444 "params": { 00:23:25.444 "process_window_size_kb": 1024, 00:23:25.444 "process_max_bandwidth_mb_sec": 0 00:23:25.444 } 00:23:25.444 }, 00:23:25.444 { 00:23:25.444 "method": "bdev_iscsi_set_options", 00:23:25.444 "params": { 00:23:25.444 "timeout_sec": 30 00:23:25.444 } 00:23:25.444 }, 00:23:25.444 { 00:23:25.444 "method": "bdev_nvme_set_options", 00:23:25.444 "params": { 00:23:25.444 "action_on_timeout": "none", 00:23:25.444 "timeout_us": 0, 00:23:25.444 "timeout_admin_us": 0, 00:23:25.444 "keep_alive_timeout_ms": 10000, 00:23:25.444 "arbitration_burst": 0, 00:23:25.444 "low_priority_weight": 0, 00:23:25.444 "medium_priority_weight": 0, 00:23:25.444 "high_priority_weight": 0, 00:23:25.444 "nvme_adminq_poll_period_us": 10000, 00:23:25.444 "nvme_ioq_poll_period_us": 0, 00:23:25.444 "io_queue_requests": 512, 00:23:25.444 "delay_cmd_submit": true, 00:23:25.444 "transport_retry_count": 4, 00:23:25.444 "bdev_retry_count": 3, 00:23:25.444 "transport_ack_timeout": 0, 00:23:25.444 "ctrlr_loss_timeout_sec": 0, 00:23:25.444 "reconnect_delay_sec": 0, 00:23:25.444 "fast_io_fail_timeout_sec": 0, 00:23:25.444 "disable_auto_failback": false, 00:23:25.444 "generate_uuids": false, 00:23:25.444 "transport_tos": 0, 00:23:25.444 "nvme_error_stat": false, 00:23:25.444 "rdma_srq_size": 0, 00:23:25.444 "io_path_stat": false, 00:23:25.444 "allow_accel_sequence": false, 00:23:25.444 "rdma_max_cq_size": 0, 00:23:25.444 "rdma_cm_event_timeout_ms": 0, 00:23:25.444 "dhchap_digests": [ 00:23:25.444 "sha256", 00:23:25.444 "sha384", 
00:23:25.444 "sha512" 00:23:25.444 ], 00:23:25.444 "dhchap_dhgroups": [ 00:23:25.444 "null", 00:23:25.444 "ffdhe2048", 00:23:25.444 "ffdhe3072", 00:23:25.444 "ffdhe4096", 00:23:25.444 "ffdhe6144", 00:23:25.444 "ffdhe8192" 00:23:25.444 ] 00:23:25.444 } 00:23:25.444 }, 00:23:25.444 { 00:23:25.444 "method": "bdev_nvme_attach_controller", 00:23:25.444 "params": { 00:23:25.444 "name": "nvme0", 00:23:25.444 "trtype": "TCP", 00:23:25.444 "adrfam": "IPv4", 00:23:25.444 "traddr": "10.0.0.2", 00:23:25.444 "trsvcid": "4420", 00:23:25.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.444 "prchk_reftag": false, 00:23:25.444 "prchk_guard": false, 00:23:25.444 "ctrlr_loss_timeout_sec": 0, 00:23:25.444 "reconnect_delay_sec": 0, 00:23:25.444 "fast_io_fail_timeout_sec": 0, 00:23:25.444 "psk": "key0", 00:23:25.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:25.444 "hdgst": false, 00:23:25.444 "ddgst": false 00:23:25.444 } 00:23:25.444 }, 00:23:25.444 { 00:23:25.444 "method": "bdev_nvme_set_hotplug", 00:23:25.444 "params": { 00:23:25.444 "period_us": 100000, 00:23:25.444 "enable": false 00:23:25.444 } 00:23:25.444 }, 00:23:25.444 { 00:23:25.444 "method": "bdev_enable_histogram", 00:23:25.444 "params": { 00:23:25.444 "name": "nvme0n1", 00:23:25.444 "enable": true 00:23:25.444 } 00:23:25.444 }, 00:23:25.444 { 00:23:25.444 "method": "bdev_wait_for_examine" 00:23:25.444 } 00:23:25.444 ] 00:23:25.444 }, 00:23:25.444 { 00:23:25.444 "subsystem": "nbd", 00:23:25.444 "config": [] 00:23:25.444 } 00:23:25.444 ] 00:23:25.444 }' 00:23:25.444 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1292124 00:23:25.444 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1292124 ']' 00:23:25.444 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1292124 00:23:25.444 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:25.444 09:57:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:25.444 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1292124 00:23:25.444 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:25.444 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:25.444 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1292124' 00:23:25.444 killing process with pid 1292124 00:23:25.444 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1292124 00:23:25.444 Received shutdown signal, test time was about 1.000000 seconds 00:23:25.444 00:23:25.445 Latency(us) 00:23:25.445 [2024-12-07T08:57:54.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.445 [2024-12-07T08:57:54.171Z] =================================================================================================================== 00:23:25.445 [2024-12-07T08:57:54.171Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:25.445 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1292124 00:23:25.704 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1292016 00:23:25.704 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1292016 ']' 00:23:25.704 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1292016 00:23:25.704 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:25.704 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:25.704 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 1292016 00:23:25.704 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:25.704 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:25.704 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1292016' 00:23:25.704 killing process with pid 1292016 00:23:25.704 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1292016 00:23:25.704 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1292016 00:23:25.964 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:25.964 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:25.964 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:25.964 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:25.964 "subsystems": [ 00:23:25.964 { 00:23:25.964 "subsystem": "keyring", 00:23:25.964 "config": [ 00:23:25.964 { 00:23:25.964 "method": "keyring_file_add_key", 00:23:25.964 "params": { 00:23:25.964 "name": "key0", 00:23:25.964 "path": "/tmp/tmp.il7fgHUYpt" 00:23:25.964 } 00:23:25.964 } 00:23:25.964 ] 00:23:25.964 }, 00:23:25.964 { 00:23:25.964 "subsystem": "iobuf", 00:23:25.964 "config": [ 00:23:25.964 { 00:23:25.964 "method": "iobuf_set_options", 00:23:25.964 "params": { 00:23:25.964 "small_pool_count": 8192, 00:23:25.964 "large_pool_count": 1024, 00:23:25.964 "small_bufsize": 8192, 00:23:25.964 "large_bufsize": 135168 00:23:25.964 } 00:23:25.964 } 00:23:25.964 ] 00:23:25.964 }, 00:23:25.964 { 00:23:25.964 "subsystem": "sock", 00:23:25.964 "config": [ 00:23:25.964 { 00:23:25.964 "method": "sock_set_default_impl", 00:23:25.964 "params": { 00:23:25.964 "impl_name": "posix" 00:23:25.964 } 
00:23:25.964 }, 00:23:25.964 { 00:23:25.964 "method": "sock_impl_set_options", 00:23:25.964 "params": { 00:23:25.964 "impl_name": "ssl", 00:23:25.964 "recv_buf_size": 4096, 00:23:25.964 "send_buf_size": 4096, 00:23:25.964 "enable_recv_pipe": true, 00:23:25.964 "enable_quickack": false, 00:23:25.964 "enable_placement_id": 0, 00:23:25.964 "enable_zerocopy_send_server": true, 00:23:25.964 "enable_zerocopy_send_client": false, 00:23:25.964 "zerocopy_threshold": 0, 00:23:25.964 "tls_version": 0, 00:23:25.964 "enable_ktls": false 00:23:25.964 } 00:23:25.964 }, 00:23:25.964 { 00:23:25.964 "method": "sock_impl_set_options", 00:23:25.964 "params": { 00:23:25.964 "impl_name": "posix", 00:23:25.964 "recv_buf_size": 2097152, 00:23:25.964 "send_buf_size": 2097152, 00:23:25.964 "enable_recv_pipe": true, 00:23:25.964 "enable_quickack": false, 00:23:25.964 "enable_placement_id": 0, 00:23:25.964 "enable_zerocopy_send_server": true, 00:23:25.964 "enable_zerocopy_send_client": false, 00:23:25.964 "zerocopy_threshold": 0, 00:23:25.964 "tls_version": 0, 00:23:25.964 "enable_ktls": false 00:23:25.964 } 00:23:25.964 } 00:23:25.965 ] 00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "subsystem": "vmd", 00:23:25.965 "config": [] 00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "subsystem": "accel", 00:23:25.965 "config": [ 00:23:25.965 { 00:23:25.965 "method": "accel_set_options", 00:23:25.965 "params": { 00:23:25.965 "small_cache_size": 128, 00:23:25.965 "large_cache_size": 16, 00:23:25.965 "task_count": 2048, 00:23:25.965 "sequence_count": 2048, 00:23:25.965 "buf_count": 2048 00:23:25.965 } 00:23:25.965 } 00:23:25.965 ] 00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "subsystem": "bdev", 00:23:25.965 "config": [ 00:23:25.965 { 00:23:25.965 "method": "bdev_set_options", 00:23:25.965 "params": { 00:23:25.965 "bdev_io_pool_size": 65535, 00:23:25.965 "bdev_io_cache_size": 256, 00:23:25.965 "bdev_auto_examine": true, 00:23:25.965 "iobuf_small_cache_size": 128, 00:23:25.965 "iobuf_large_cache_size": 16 
00:23:25.965 } 00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "method": "bdev_raid_set_options", 00:23:25.965 "params": { 00:23:25.965 "process_window_size_kb": 1024, 00:23:25.965 "process_max_bandwidth_mb_sec": 0 00:23:25.965 } 00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "method": "bdev_iscsi_set_options", 00:23:25.965 "params": { 00:23:25.965 "timeout_sec": 30 00:23:25.965 } 00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "method": "bdev_nvme_set_options", 00:23:25.965 "params": { 00:23:25.965 "action_on_timeout": "none", 00:23:25.965 "timeout_us": 0, 00:23:25.965 "timeout_admin_us": 0, 00:23:25.965 "keep_alive_timeout_ms": 10000, 00:23:25.965 "arbitration_burst": 0, 00:23:25.965 "low_priority_weight": 0, 00:23:25.965 "medium_priority_weight": 0, 00:23:25.965 "high_priority_weight": 0, 00:23:25.965 "nvme_adminq_poll_period_us": 10000, 00:23:25.965 "nvme_ioq_poll_period_us": 0, 00:23:25.965 "io_queue_requests": 0, 00:23:25.965 "delay_cmd_submit": true, 00:23:25.965 "transport_retry_count": 4, 00:23:25.965 "bdev_retry_count": 3, 00:23:25.965 "transport_ack_timeout": 0, 00:23:25.965 "ctrlr_loss_timeout_sec": 0, 00:23:25.965 "reconnect_delay_sec": 0, 00:23:25.965 "fast_io_fail_timeout_sec": 0, 00:23:25.965 "disable_auto_failback": false, 00:23:25.965 "generate_uuids": false, 00:23:25.965 "transport_tos": 0, 00:23:25.965 "nvme_error_stat": false, 00:23:25.965 "rdma_srq_size": 0, 00:23:25.965 "io_path_stat": false, 00:23:25.965 "allow_accel_sequence": false, 00:23:25.965 "rdma_max_cq_size": 0, 00:23:25.965 "rdma_cm_event_timeout_ms": 0, 00:23:25.965 "dhchap_digests": [ 00:23:25.965 "sha256", 00:23:25.965 "sha384", 00:23:25.965 "sha512" 00:23:25.965 ], 00:23:25.965 "dhchap_dhgroups": [ 00:23:25.965 "null", 00:23:25.965 "ffdhe2048", 00:23:25.965 "ffdhe3072", 00:23:25.965 "ffdhe4096", 00:23:25.965 "ffdhe6144", 00:23:25.965 "ffdhe8192" 00:23:25.965 ] 00:23:25.965 } 00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "method": "bdev_nvme_set_hotplug", 00:23:25.965 "params": { 00:23:25.965 
"period_us": 100000, 00:23:25.965 "enable": false 00:23:25.965 } 00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "method": "bdev_malloc_create", 00:23:25.965 "params": { 00:23:25.965 "name": "malloc0", 00:23:25.965 "num_blocks": 8192, 00:23:25.965 "block_size": 4096, 00:23:25.965 "physical_block_size": 4096, 00:23:25.965 "uuid": "ef515781-b53f-4bee-aa9f-8022c1eb1515", 00:23:25.965 "optimal_io_boundary": 0, 00:23:25.965 "md_size": 0, 00:23:25.965 "dif_type": 0, 00:23:25.965 "dif_is_head_of_md": false, 00:23:25.965 "dif_pi_format": 0 00:23:25.965 } 00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "method": "bdev_wait_for_examine" 00:23:25.965 } 00:23:25.965 ] 00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "subsystem": "nbd", 00:23:25.965 "config": [] 00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "subsystem": "scheduler", 00:23:25.965 "config": [ 00:23:25.965 { 00:23:25.965 "method": "framework_set_scheduler", 00:23:25.965 "params": { 00:23:25.965 "name": "static" 00:23:25.965 } 00:23:25.965 } 00:23:25.965 ] 00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "subsystem": "nvmf", 00:23:25.965 "config": [ 00:23:25.965 { 00:23:25.965 "method": "nvmf_set_config", 00:23:25.965 "params": { 00:23:25.965 "discovery_filter": "match_any", 00:23:25.965 "admin_cmd_passthru": { 00:23:25.965 "identify_ctrlr": false 00:23:25.965 }, 00:23:25.965 "dhchap_digests": [ 00:23:25.965 "sha256", 00:23:25.965 "sha384", 00:23:25.965 "sha512" 00:23:25.965 ], 00:23:25.965 "dhchap_dhgroups": [ 00:23:25.965 "null", 00:23:25.965 "ffdhe2048", 00:23:25.965 "ffdhe3072", 00:23:25.965 "ffdhe4096", 00:23:25.965 "ffdhe6144", 00:23:25.965 "ffdhe8192" 00:23:25.965 ] 00:23:25.965 } 00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "method": "nvmf_set_max_subsystems", 00:23:25.965 "params": { 00:23:25.965 "max_subsystems": 1024 00:23:25.965 } 00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "method": "nvmf_set_crdt", 00:23:25.965 "params": { 00:23:25.965 "crdt1": 0, 00:23:25.965 "crdt2": 0, 00:23:25.965 "crdt3": 0 00:23:25.965 } 
00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "method": "nvmf_create_transport", 00:23:25.965 "params": { 00:23:25.965 "trtype": "TCP", 00:23:25.965 "max_queue_depth": 128, 00:23:25.965 "max_io_qpairs_per_ctrlr": 127, 00:23:25.965 "in_capsule_data_size": 4096, 00:23:25.965 "max_io_size": 131072, 00:23:25.965 "io_unit_size": 131072, 00:23:25.965 "max_aq_depth": 128, 00:23:25.965 "num_shared_buffers": 511, 00:23:25.965 "buf_cache_size": 4294967295, 00:23:25.965 "dif_insert_or_strip": false, 00:23:25.965 "zcopy": false, 00:23:25.965 "c2h_success": false, 00:23:25.965 "sock_priority": 0, 00:23:25.965 "abort_timeout_sec": 1, 00:23:25.965 "ack_timeout": 0, 00:23:25.965 "data_wr_pool_size": 0 00:23:25.965 } 00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "method": "nvmf_create_subsystem", 00:23:25.965 "params": { 00:23:25.965 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.965 "allow_any_host": false, 00:23:25.965 "serial_number": "00000000000000000000", 00:23:25.965 "model_number": "SPDK bdev Controller", 00:23:25.965 "max_namespaces": 32, 00:23:25.965 "min_cntlid": 1, 00:23:25.965 "max_cntlid": 65519, 00:23:25.965 "ana_reporting": false 00:23:25.965 } 00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "method": "nvmf_subsystem_add_host", 00:23:25.965 "params": { 00:23:25.965 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.965 "host": "nqn.2016-06.io.spdk:host1", 00:23:25.965 "psk": "key0" 00:23:25.965 } 00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "method": "nvmf_subsystem_add_ns", 00:23:25.965 "params": { 00:23:25.965 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.965 "namespace": { 00:23:25.965 "nsid": 1, 00:23:25.965 "bdev_name": "malloc0", 00:23:25.965 "nguid": "EF515781B53F4BEEAA9F8022C1EB1515", 00:23:25.965 "uuid": "ef515781-b53f-4bee-aa9f-8022c1eb1515", 00:23:25.965 "no_auto_visible": false 00:23:25.965 } 00:23:25.965 } 00:23:25.965 }, 00:23:25.965 { 00:23:25.965 "method": "nvmf_subsystem_add_listener", 00:23:25.965 "params": { 00:23:25.965 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:23:25.965 "listen_address": { 00:23:25.965 "trtype": "TCP", 00:23:25.965 "adrfam": "IPv4", 00:23:25.965 "traddr": "10.0.0.2", 00:23:25.965 "trsvcid": "4420" 00:23:25.965 }, 00:23:25.965 "secure_channel": false, 00:23:25.965 "sock_impl": "ssl" 00:23:25.965 } 00:23:25.965 } 00:23:25.965 ] 00:23:25.965 } 00:23:25.965 ] 00:23:25.965 }' 00:23:25.965 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.965 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=1292599 00:23:25.965 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 1292599 00:23:25.965 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:25.965 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1292599 ']' 00:23:25.965 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.965 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:25.965 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.965 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:25.965 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.965 [2024-12-07 09:57:54.549398] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:23:25.965 [2024-12-07 09:57:54.549447] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.965 [2024-12-07 09:57:54.607629] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.966 [2024-12-07 09:57:54.643693] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.966 [2024-12-07 09:57:54.643734] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.966 [2024-12-07 09:57:54.643741] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.966 [2024-12-07 09:57:54.643747] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.966 [2024-12-07 09:57:54.643753] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:25.966 [2024-12-07 09:57:54.643823] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.225 [2024-12-07 09:57:54.863161] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.225 [2024-12-07 09:57:54.895135] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:26.225 [2024-12-07 09:57:54.895322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.796 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:26.796 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:26.797 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:26.797 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:26.797 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.797 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.797 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1292631 00:23:26.797 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1292631 /var/tmp/bdevperf.sock 00:23:26.797 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1292631 ']' 00:23:26.797 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.797 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:26.797 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:23:26.797 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:26.797 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:26.797 "subsystems": [ 00:23:26.797 { 00:23:26.797 "subsystem": "keyring", 00:23:26.797 "config": [ 00:23:26.797 { 00:23:26.797 "method": "keyring_file_add_key", 00:23:26.797 "params": { 00:23:26.797 "name": "key0", 00:23:26.797 "path": "/tmp/tmp.il7fgHUYpt" 00:23:26.797 } 00:23:26.797 } 00:23:26.797 ] 00:23:26.797 }, 00:23:26.797 { 00:23:26.797 "subsystem": "iobuf", 00:23:26.797 "config": [ 00:23:26.797 { 00:23:26.797 "method": "iobuf_set_options", 00:23:26.797 "params": { 00:23:26.797 "small_pool_count": 8192, 00:23:26.797 "large_pool_count": 1024, 00:23:26.797 "small_bufsize": 8192, 00:23:26.797 "large_bufsize": 135168 00:23:26.797 } 00:23:26.797 } 00:23:26.797 ] 00:23:26.797 }, 00:23:26.797 { 00:23:26.797 "subsystem": "sock", 00:23:26.797 "config": [ 00:23:26.797 { 00:23:26.797 "method": "sock_set_default_impl", 00:23:26.797 "params": { 00:23:26.797 "impl_name": "posix" 00:23:26.797 } 00:23:26.797 }, 00:23:26.797 { 00:23:26.797 "method": "sock_impl_set_options", 00:23:26.797 "params": { 00:23:26.797 "impl_name": "ssl", 00:23:26.797 "recv_buf_size": 4096, 00:23:26.797 "send_buf_size": 4096, 00:23:26.797 "enable_recv_pipe": true, 00:23:26.797 "enable_quickack": false, 00:23:26.797 "enable_placement_id": 0, 00:23:26.797 "enable_zerocopy_send_server": true, 00:23:26.797 "enable_zerocopy_send_client": false, 00:23:26.797 "zerocopy_threshold": 0, 00:23:26.797 "tls_version": 0, 00:23:26.797 "enable_ktls": false 00:23:26.797 } 00:23:26.797 }, 00:23:26.797 { 00:23:26.797 "method": "sock_impl_set_options", 00:23:26.797 "params": { 00:23:26.797 "impl_name": "posix", 
00:23:26.797 "recv_buf_size": 2097152, 00:23:26.797 "send_buf_size": 2097152, 00:23:26.797 "enable_recv_pipe": true, 00:23:26.797 "enable_quickack": false, 00:23:26.797 "enable_placement_id": 0, 00:23:26.797 "enable_zerocopy_send_server": true, 00:23:26.797 "enable_zerocopy_send_client": false, 00:23:26.797 "zerocopy_threshold": 0, 00:23:26.797 "tls_version": 0, 00:23:26.797 "enable_ktls": false 00:23:26.797 } 00:23:26.797 } 00:23:26.797 ] 00:23:26.797 }, 00:23:26.797 { 00:23:26.797 "subsystem": "vmd", 00:23:26.797 "config": [] 00:23:26.797 }, 00:23:26.797 { 00:23:26.797 "subsystem": "accel", 00:23:26.797 "config": [ 00:23:26.797 { 00:23:26.797 "method": "accel_set_options", 00:23:26.797 "params": { 00:23:26.797 "small_cache_size": 128, 00:23:26.797 "large_cache_size": 16, 00:23:26.797 "task_count": 2048, 00:23:26.797 "sequence_count": 2048, 00:23:26.797 "buf_count": 2048 00:23:26.797 } 00:23:26.797 } 00:23:26.797 ] 00:23:26.797 }, 00:23:26.797 { 00:23:26.797 "subsystem": "bdev", 00:23:26.797 "config": [ 00:23:26.797 { 00:23:26.797 "method": "bdev_set_options", 00:23:26.797 "params": { 00:23:26.797 "bdev_io_pool_size": 65535, 00:23:26.797 "bdev_io_cache_size": 256, 00:23:26.797 "bdev_auto_examine": true, 00:23:26.797 "iobuf_small_cache_size": 128, 00:23:26.797 "iobuf_large_cache_size": 16 00:23:26.797 } 00:23:26.797 }, 00:23:26.797 { 00:23:26.797 "method": "bdev_raid_set_options", 00:23:26.797 "params": { 00:23:26.797 "process_window_size_kb": 1024, 00:23:26.797 "process_max_bandwidth_mb_sec": 0 00:23:26.797 } 00:23:26.797 }, 00:23:26.797 { 00:23:26.797 "method": "bdev_iscsi_set_options", 00:23:26.797 "params": { 00:23:26.797 "timeout_sec": 30 00:23:26.797 } 00:23:26.797 }, 00:23:26.797 { 00:23:26.797 "method": "bdev_nvme_set_options", 00:23:26.797 "params": { 00:23:26.797 "action_on_timeout": "none", 00:23:26.797 "timeout_us": 0, 00:23:26.797 "timeout_admin_us": 0, 00:23:26.797 "keep_alive_timeout_ms": 10000, 00:23:26.797 "arbitration_burst": 0, 00:23:26.797 
"low_priority_weight": 0, 00:23:26.797 "medium_priority_weight": 0, 00:23:26.797 "high_priority_weight": 0, 00:23:26.797 "nvme_adminq_poll_period_us": 10000, 00:23:26.797 "nvme_ioq_poll_period_us": 0, 00:23:26.797 "io_queue_requests": 512, 00:23:26.797 "delay_cmd_submit": true, 00:23:26.797 "transport_retry_count": 4, 00:23:26.797 "bdev_retry_count": 3, 00:23:26.797 "transport_ack_timeout": 0, 00:23:26.797 "ctrlr_loss_timeout_sec": 0, 00:23:26.797 "reconnect_delay_sec": 0, 00:23:26.797 "fast_io_fail_timeout_sec": 0, 00:23:26.797 "disable_auto_failback": false, 00:23:26.797 "generate_uuids": false, 00:23:26.797 "transport_tos": 0, 00:23:26.797 "nvme_error_stat": false, 00:23:26.797 "rdma_srq_size": 0, 00:23:26.797 "io_path_stat": false, 00:23:26.797 "allow_accel_sequence": false, 00:23:26.797 "rdma_max_cq_size": 0, 00:23:26.797 "rdma_cm_event_timeout_ms": 0, 00:23:26.797 "dhchap_digests": [ 00:23:26.797 "sha256", 00:23:26.797 "sha384", 00:23:26.797 "sha512" 00:23:26.797 ], 00:23:26.797 "dhchap_dhgroups": [ 00:23:26.797 "null", 00:23:26.797 "ffdhe2048", 00:23:26.797 "ffdhe3072", 00:23:26.797 "ffdhe4096", 00:23:26.797 "ffdhe6144", 00:23:26.797 "ffdhe8192" 00:23:26.797 ] 00:23:26.797 } 00:23:26.797 }, 00:23:26.797 { 00:23:26.797 "method": "bdev_nvme_attach_controller", 00:23:26.797 "params": { 00:23:26.797 "name": "nvme0", 00:23:26.797 "trtype": "TCP", 00:23:26.797 "adrfam": "IPv4", 00:23:26.797 "traddr": "10.0.0.2", 00:23:26.797 "trsvcid": "4420", 00:23:26.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.797 "prchk_reftag": false, 00:23:26.797 "prchk_guard": false, 00:23:26.797 "ctrlr_loss_timeout_sec": 0, 00:23:26.797 "reconnect_delay_sec": 0, 00:23:26.797 "fast_io_fail_timeout_sec": 0, 00:23:26.797 "psk": "key0", 00:23:26.797 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.797 "hdgst": false, 00:23:26.797 "ddgst": false 00:23:26.797 } 00:23:26.797 }, 00:23:26.797 { 00:23:26.797 "method": "bdev_nvme_set_hotplug", 00:23:26.797 "params": { 00:23:26.797 
"period_us": 100000, 00:23:26.797 "enable": false 00:23:26.797 } 00:23:26.797 }, 00:23:26.797 { 00:23:26.797 "method": "bdev_enable_histogram", 00:23:26.797 "params": { 00:23:26.797 "name": "nvme0n1", 00:23:26.797 "enable": true 00:23:26.797 } 00:23:26.797 }, 00:23:26.797 { 00:23:26.797 "method": "bdev_wait_for_examine" 00:23:26.797 } 00:23:26.797 ] 00:23:26.797 }, 00:23:26.797 { 00:23:26.797 "subsystem": "nbd", 00:23:26.797 "config": [] 00:23:26.797 } 00:23:26.797 ] 00:23:26.797 }' 00:23:26.797 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:26.797 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.797 [2024-12-07 09:57:55.454877] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:26.798 [2024-12-07 09:57:55.454929] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292631 ] 00:23:26.798 [2024-12-07 09:57:55.510988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.055 [2024-12-07 09:57:55.551775] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.055 [2024-12-07 09:57:55.698647] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.621 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:27.621 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:27.621 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:27.621 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 
00:23:27.879 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.879 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:27.879 Running I/O for 1 seconds... 00:23:29.253 5308.00 IOPS, 20.73 MiB/s 00:23:29.253 Latency(us) 00:23:29.253 [2024-12-07T08:57:57.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.253 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:29.253 Verification LBA range: start 0x0 length 0x2000 00:23:29.253 nvme0n1 : 1.01 5358.39 20.93 0.00 0.00 23710.80 6183.18 33508.84 00:23:29.253 [2024-12-07T08:57:57.979Z] =================================================================================================================== 00:23:29.253 [2024-12-07T08:57:57.979Z] Total : 5358.39 20.93 0.00 0.00 23710.80 6183.18 33508.84 00:23:29.253 { 00:23:29.253 "results": [ 00:23:29.253 { 00:23:29.253 "job": "nvme0n1", 00:23:29.253 "core_mask": "0x2", 00:23:29.253 "workload": "verify", 00:23:29.253 "status": "finished", 00:23:29.253 "verify_range": { 00:23:29.253 "start": 0, 00:23:29.253 "length": 8192 00:23:29.253 }, 00:23:29.253 "queue_depth": 128, 00:23:29.253 "io_size": 4096, 00:23:29.253 "runtime": 1.01467, 00:23:29.253 "iops": 5358.392383730671, 00:23:29.253 "mibps": 20.931220248947934, 00:23:29.253 "io_failed": 0, 00:23:29.253 "io_timeout": 0, 00:23:29.253 "avg_latency_us": 23710.80069923471, 00:23:29.253 "min_latency_us": 6183.179130434783, 00:23:29.253 "max_latency_us": 33508.84173913043 00:23:29.253 } 00:23:29.253 ], 00:23:29.253 "core_count": 1 00:23:29.253 } 00:23:29.253 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:29.254 09:57:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:29.254 nvmf_trace.0 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1292631 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1292631 ']' 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1292631 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1292631 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1292631' 00:23:29.254 killing process with pid 1292631 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1292631 00:23:29.254 Received shutdown signal, test time was about 1.000000 seconds 00:23:29.254 00:23:29.254 Latency(us) 00:23:29.254 [2024-12-07T08:57:57.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.254 [2024-12-07T08:57:57.980Z] =================================================================================================================== 00:23:29.254 [2024-12-07T08:57:57.980Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1292631 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:29.254 rmmod nvme_tcp 00:23:29.254 rmmod nvme_fabrics 00:23:29.254 rmmod nvme_keyring 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@128 -- # set -e 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 1292599 ']' 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 1292599 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1292599 ']' 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1292599 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.254 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1292599 00:23:29.512 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:29.512 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:29.512 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1292599' 00:23:29.512 killing process with pid 1292599 00:23:29.512 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1292599 00:23:29.513 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1292599 00:23:29.513 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:29.513 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:29.513 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:29.513 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:29.513 09:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:23:29.513 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:29.513 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:23:29.513 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:29.513 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:29.513 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.513 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.513 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.yBQuCWndqe /tmp/tmp.cUzA0etz18 /tmp/tmp.il7fgHUYpt 00:23:32.046 00:23:32.046 real 1m18.746s 00:23:32.046 user 2m2.041s 00:23:32.046 sys 0m28.601s 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.046 ************************************ 00:23:32.046 END TEST nvmf_tls 00:23:32.046 ************************************ 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:32.046 ************************************ 00:23:32.046 START TEST nvmf_fips 00:23:32.046 ************************************ 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:32.046 * Looking for test storage... 00:23:32.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@341 -- # ver2_l=1 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:32.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.046 --rc genhtml_branch_coverage=1 00:23:32.046 --rc genhtml_function_coverage=1 00:23:32.046 --rc genhtml_legend=1 00:23:32.046 --rc geninfo_all_blocks=1 00:23:32.046 --rc geninfo_unexecuted_blocks=1 00:23:32.046 00:23:32.046 ' 00:23:32.046 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:32.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.046 --rc genhtml_branch_coverage=1 00:23:32.047 --rc genhtml_function_coverage=1 00:23:32.047 --rc genhtml_legend=1 00:23:32.047 --rc geninfo_all_blocks=1 00:23:32.047 --rc geninfo_unexecuted_blocks=1 00:23:32.047 00:23:32.047 ' 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:32.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.047 --rc genhtml_branch_coverage=1 00:23:32.047 --rc genhtml_function_coverage=1 00:23:32.047 --rc genhtml_legend=1 00:23:32.047 --rc geninfo_all_blocks=1 00:23:32.047 --rc geninfo_unexecuted_blocks=1 00:23:32.047 00:23:32.047 ' 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:32.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.047 --rc genhtml_branch_coverage=1 00:23:32.047 --rc genhtml_function_coverage=1 00:23:32.047 --rc genhtml_legend=1 00:23:32.047 --rc geninfo_all_blocks=1 00:23:32.047 --rc geninfo_unexecuted_blocks=1 00:23:32.047 00:23:32.047 ' 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:32.047 09:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:32.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.047 09:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:32.047 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:23:32.048 Error setting digest 00:23:32.048 40C2253BBC7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:32.048 40C2253BBC7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:32.048 09:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:32.048 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:37.305 09:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:37.305 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:37.305 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:37.306 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.306 09:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:37.306 Found net devices under 0000:86:00.0: cvl_0_0 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.306 
09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:37.306 Found net devices under 0000:86:00.1: cvl_0_1 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # is_hw=yes 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:37.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:23:37.306 00:23:37.306 --- 10.0.0.2 ping statistics --- 00:23:37.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.306 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:37.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:37.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:23:37.306 00:23:37.306 --- 10.0.0.1 ping statistics --- 00:23:37.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.306 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # return 0 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:37.306 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:37.306 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:37.306 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:37.306 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:37.306 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:37.564 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=1296651 00:23:37.564 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:37.564 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 1296651 00:23:37.564 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1296651 ']' 00:23:37.564 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.564 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:37.564 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.564 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:37.564 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:37.564 [2024-12-07 09:58:06.100563] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:23:37.564 [2024-12-07 09:58:06.100613] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.564 [2024-12-07 09:58:06.159061] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.564 [2024-12-07 09:58:06.197899] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.564 [2024-12-07 09:58:06.197942] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.564 [2024-12-07 09:58:06.197955] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.564 [2024-12-07 09:58:06.197961] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.564 [2024-12-07 09:58:06.197965] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:37.564 [2024-12-07 09:58:06.197982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.564 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:37.564 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:37.564 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:37.564 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:37.564 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:37.822 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.822 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:37.822 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:37.822 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:37.822 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.tGJ 00:23:37.822 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:37.822 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.tGJ 00:23:37.822 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.tGJ 00:23:37.822 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.tGJ 00:23:37.822 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:37.822 [2024-12-07 09:58:06.495793] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.822 [2024-12-07 09:58:06.511787] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.822 [2024-12-07 09:58:06.511999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.080 malloc0 00:23:38.080 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:38.080 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1296685 00:23:38.080 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1296685 /var/tmp/bdevperf.sock 00:23:38.080 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:38.080 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1296685 ']' 00:23:38.080 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.080 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:38.080 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.080 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:38.080 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:38.080 [2024-12-07 09:58:06.644157] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:23:38.080 [2024-12-07 09:58:06.644206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296685 ] 00:23:38.080 [2024-12-07 09:58:06.694162] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.080 [2024-12-07 09:58:06.733658] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.338 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:38.338 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:38.338 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.tGJ 00:23:38.338 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:38.596 [2024-12-07 09:58:07.187326] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.596 TLSTESTn1 00:23:38.596 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:38.854 Running I/O for 10 seconds... 
00:23:40.719 5268.00 IOPS, 20.58 MiB/s [2024-12-07T08:58:10.816Z] 5409.00 IOPS, 21.13 MiB/s [2024-12-07T08:58:11.747Z] 5449.00 IOPS, 21.29 MiB/s [2024-12-07T08:58:12.679Z] 5472.50 IOPS, 21.38 MiB/s [2024-12-07T08:58:13.611Z] 5517.60 IOPS, 21.55 MiB/s [2024-12-07T08:58:14.542Z] 5526.67 IOPS, 21.59 MiB/s [2024-12-07T08:58:15.473Z] 5515.14 IOPS, 21.54 MiB/s [2024-12-07T08:58:16.402Z] 5526.12 IOPS, 21.59 MiB/s [2024-12-07T08:58:17.774Z] 5548.00 IOPS, 21.67 MiB/s [2024-12-07T08:58:17.774Z] 5556.90 IOPS, 21.71 MiB/s 00:23:49.048 Latency(us) 00:23:49.048 [2024-12-07T08:58:17.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.048 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:49.048 Verification LBA range: start 0x0 length 0x2000 00:23:49.048 TLSTESTn1 : 10.02 5561.02 21.72 0.00 0.00 22981.57 6183.18 27126.21 00:23:49.048 [2024-12-07T08:58:17.774Z] =================================================================================================================== 00:23:49.048 [2024-12-07T08:58:17.774Z] Total : 5561.02 21.72 0.00 0.00 22981.57 6183.18 27126.21 00:23:49.048 { 00:23:49.048 "results": [ 00:23:49.048 { 00:23:49.048 "job": "TLSTESTn1", 00:23:49.048 "core_mask": "0x4", 00:23:49.048 "workload": "verify", 00:23:49.048 "status": "finished", 00:23:49.048 "verify_range": { 00:23:49.048 "start": 0, 00:23:49.048 "length": 8192 00:23:49.048 }, 00:23:49.048 "queue_depth": 128, 00:23:49.048 "io_size": 4096, 00:23:49.048 "runtime": 10.015071, 00:23:49.048 "iops": 5561.018988282759, 00:23:49.048 "mibps": 21.722730422979527, 00:23:49.048 "io_failed": 0, 00:23:49.048 "io_timeout": 0, 00:23:49.048 "avg_latency_us": 22981.57104900848, 00:23:49.048 "min_latency_us": 6183.179130434783, 00:23:49.048 "max_latency_us": 27126.205217391303 00:23:49.048 } 00:23:49.048 ], 00:23:49.048 "core_count": 1 00:23:49.048 } 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:49.048 
09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:49.048 nvmf_trace.0 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1296685 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1296685 ']' 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1296685 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1296685 00:23:49.048 09:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1296685' 00:23:49.048 killing process with pid 1296685 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1296685 00:23:49.048 Received shutdown signal, test time was about 10.000000 seconds 00:23:49.048 00:23:49.048 Latency(us) 00:23:49.048 [2024-12-07T08:58:17.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.048 [2024-12-07T08:58:17.774Z] =================================================================================================================== 00:23:49.048 [2024-12-07T08:58:17.774Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1296685 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:49.048 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:49.048 rmmod nvme_tcp 00:23:49.048 rmmod nvme_fabrics 00:23:49.306 rmmod nvme_keyring 00:23:49.306 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:23:49.306 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:49.306 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:49.306 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 1296651 ']' 00:23:49.306 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 1296651 00:23:49.306 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1296651 ']' 00:23:49.306 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1296651 00:23:49.306 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:49.306 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:49.306 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1296651 00:23:49.306 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:49.306 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:49.306 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1296651' 00:23:49.306 killing process with pid 1296651 00:23:49.306 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1296651 00:23:49.306 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1296651 00:23:49.564 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:49.564 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:49.564 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:49.564 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:23:49.564 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:23:49.564 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:49.564 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:23:49.564 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:49.564 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:49.564 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.564 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.564 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.472 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:51.472 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.tGJ 00:23:51.472 00:23:51.472 real 0m19.746s 00:23:51.472 user 0m21.523s 00:23:51.472 sys 0m8.578s 00:23:51.472 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:51.472 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:51.472 ************************************ 00:23:51.472 END TEST nvmf_fips 00:23:51.472 ************************************ 00:23:51.472 09:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:51.472 09:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:51.472 09:58:20 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:23:51.472 09:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:51.472 ************************************ 00:23:51.472 START TEST nvmf_control_msg_list 00:23:51.472 ************************************ 00:23:51.472 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:51.731 * Looking for test storage... 00:23:51.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:51.731 09:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:51.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.731 --rc genhtml_branch_coverage=1 00:23:51.731 --rc genhtml_function_coverage=1 00:23:51.731 --rc genhtml_legend=1 00:23:51.731 --rc geninfo_all_blocks=1 00:23:51.731 --rc geninfo_unexecuted_blocks=1 00:23:51.731 00:23:51.731 ' 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:51.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.731 --rc genhtml_branch_coverage=1 00:23:51.731 --rc genhtml_function_coverage=1 00:23:51.731 --rc genhtml_legend=1 00:23:51.731 --rc geninfo_all_blocks=1 00:23:51.731 --rc geninfo_unexecuted_blocks=1 00:23:51.731 00:23:51.731 ' 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:51.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.731 --rc genhtml_branch_coverage=1 00:23:51.731 --rc genhtml_function_coverage=1 00:23:51.731 --rc genhtml_legend=1 00:23:51.731 --rc geninfo_all_blocks=1 00:23:51.731 --rc geninfo_unexecuted_blocks=1 00:23:51.731 00:23:51.731 ' 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # 
LCOV='lcov 00:23:51.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.731 --rc genhtml_branch_coverage=1 00:23:51.731 --rc genhtml_function_coverage=1 00:23:51.731 --rc genhtml_legend=1 00:23:51.731 --rc geninfo_all_blocks=1 00:23:51.731 --rc geninfo_unexecuted_blocks=1 00:23:51.731 00:23:51.731 ' 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:51.731 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.732 09:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:51.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:51.732 09:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:23:51.732 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:56.990 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.990 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:23:56.990 09:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:56.990 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:56.990 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:56.990 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:56.990 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:56.990 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:23:56.990 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:56.990 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:23:56.991 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:56.991 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:56.991 09:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:56.991 Found net devices under 0000:86:00.0: cvl_0_0 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:56.991 09:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:56.991 Found net devices under 0000:86:00.1: cvl_0_1 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # is_hw=yes 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 
-- # NVMF_SECOND_TARGET_IP= 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.991 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment 
--comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:57.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:23:57.249 00:23:57.249 --- 10.0.0.2 ping statistics --- 00:23:57.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.249 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:57.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:23:57.249 00:23:57.249 --- 10.0.0.1 ping statistics --- 00:23:57.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.249 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # return 0 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:57.249 09:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=1302044 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 1302044 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 1302044 ']' 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:57.249 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:57.249 [2024-12-07 09:58:25.907040] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:57.249 [2024-12-07 09:58:25.907085] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.249 [2024-12-07 09:58:25.964272] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.507 [2024-12-07 09:58:26.005227] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.507 [2024-12-07 09:58:26.005264] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.507 [2024-12-07 09:58:26.005271] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.507 [2024-12-07 09:58:26.005278] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.507 [2024-12-07 09:58:26.005282] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:57.507 [2024-12-07 09:58:26.005298] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.507 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:57.507 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:23:57.507 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:57.507 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:57.507 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:57.507 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.507 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:57.507 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:57.507 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:57.507 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.507 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:57.507 [2024-12-07 09:58:26.130518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:57.508 Malloc0 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:57.508 [2024-12-07 09:58:26.179847] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1302064 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1302065 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1302066 00:23:57.508 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1302064 00:23:57.508 [2024-12-07 09:58:26.224357] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:23:57.508 [2024-12-07 09:58:26.224566] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:57.765 [2024-12-07 09:58:26.234405] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:58.698 Initializing NVMe Controllers 00:23:58.698 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:58.698 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:58.698 Initialization complete. Launching workers. 00:23:58.698 ======================================================== 00:23:58.698 Latency(us) 00:23:58.698 Device Information : IOPS MiB/s Average min max 00:23:58.698 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40889.79 40681.13 41069.75 00:23:58.698 ======================================================== 00:23:58.698 Total : 25.00 0.10 40889.79 40681.13 41069.75 00:23:58.698 00:23:58.956 Initializing NVMe Controllers 00:23:58.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:58.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:58.956 Initialization complete. Launching workers. 
00:23:58.956 ======================================================== 00:23:58.956 Latency(us) 00:23:58.956 Device Information : IOPS MiB/s Average min max 00:23:58.956 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40928.93 40601.72 41895.74 00:23:58.956 ======================================================== 00:23:58.956 Total : 25.00 0.10 40928.93 40601.72 41895.74 00:23:58.956 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1302065 00:23:58.956 Initializing NVMe Controllers 00:23:58.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:58.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:58.956 Initialization complete. Launching workers. 00:23:58.956 ======================================================== 00:23:58.956 Latency(us) 00:23:58.956 Device Information : IOPS MiB/s Average min max 00:23:58.956 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 30.00 0.12 34132.13 186.54 41350.62 00:23:58.956 ======================================================== 00:23:58.956 Total : 30.00 0.12 34132.13 186.54 41350.62 00:23:58.956 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1302066 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:58.956 09:58:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:58.956 rmmod nvme_tcp 00:23:58.956 rmmod nvme_fabrics 00:23:58.956 rmmod nvme_keyring 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 1302044 ']' 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 1302044 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 1302044 ']' 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 1302044 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1302044 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 1302044' 00:23:58.956 killing process with pid 1302044 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 1302044 00:23:58.956 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 1302044 00:23:59.214 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:59.214 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:59.214 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:59.214 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:59.214 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:59.214 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:23:59.214 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:23:59.214 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:59.214 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:59.214 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.214 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.214 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.749 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:01.749 00:24:01.749 real 0m9.720s 00:24:01.749 user 0m6.825s 
00:24:01.749 sys 0m4.962s 00:24:01.749 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:01.749 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:01.749 ************************************ 00:24:01.749 END TEST nvmf_control_msg_list 00:24:01.749 ************************************ 00:24:01.749 09:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:01.749 09:58:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:01.749 09:58:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:01.749 09:58:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:01.749 ************************************ 00:24:01.749 START TEST nvmf_wait_for_buf 00:24:01.749 ************************************ 00:24:01.749 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:01.749 * Looking for test storage... 
00:24:01.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # 
export 'LCOV_OPTS= 00:24:01.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.749 --rc genhtml_branch_coverage=1 00:24:01.749 --rc genhtml_function_coverage=1 00:24:01.749 --rc genhtml_legend=1 00:24:01.749 --rc geninfo_all_blocks=1 00:24:01.749 --rc geninfo_unexecuted_blocks=1 00:24:01.749 00:24:01.749 ' 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:01.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.749 --rc genhtml_branch_coverage=1 00:24:01.749 --rc genhtml_function_coverage=1 00:24:01.749 --rc genhtml_legend=1 00:24:01.749 --rc geninfo_all_blocks=1 00:24:01.749 --rc geninfo_unexecuted_blocks=1 00:24:01.749 00:24:01.749 ' 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:01.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.749 --rc genhtml_branch_coverage=1 00:24:01.749 --rc genhtml_function_coverage=1 00:24:01.749 --rc genhtml_legend=1 00:24:01.749 --rc geninfo_all_blocks=1 00:24:01.749 --rc geninfo_unexecuted_blocks=1 00:24:01.749 00:24:01.749 ' 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:01.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.749 --rc genhtml_branch_coverage=1 00:24:01.749 --rc genhtml_function_coverage=1 00:24:01.749 --rc genhtml_legend=1 00:24:01.749 --rc geninfo_all_blocks=1 00:24:01.749 --rc geninfo_unexecuted_blocks=1 00:24:01.749 00:24:01.749 ' 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.749 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:01.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:01.750 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.013 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 
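The e810/x722/mlx arrays being filled above are a vendor:device lookup table over `pci_bus_cache`. As a sketch only, a hypothetical `classify_nic` helper (not part of nvmf/common.sh) capturing the same mapping, using the Intel and Mellanox IDs listed in the log:

```shell
#!/bin/sh
# Hypothetical helper: map a PCI vendor:device pair to the NIC family the
# test framework sorts it into. IDs are taken from the log above; the real
# code builds bash arrays from pci_bus_cache instead of using case.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 (ice driver)
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx ;;     # Mellanox ConnectX family
        *)                           echo unknown ;;
    esac
}
```

`classify_nic 0x8086:0x159b` reports e810, which matches the two 0000:86:00.x devices the scan finds next.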
00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:07.014 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:07.014 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@390 -- # (( 
0 > 0 )) 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:07.014 Found net devices under 0000:86:00.0: cvl_0_0 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:07.014 Found net devices under 0000:86:00.1: cvl_0_1 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # is_hw=yes 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:07.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:24:07.014 00:24:07.014 --- 10.0.0.2 ping statistics --- 00:24:07.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.014 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:07.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:24:07.014 00:24:07.014 --- 10.0.0.1 ping statistics --- 00:24:07.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.014 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # return 0 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == 
tcp ']' 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=1305807 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 1305807 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 1305807 ']' 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:07.014 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:07.014 [2024-12-07 09:58:35.627126] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:24:07.014 [2024-12-07 09:58:35.627171] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.014 [2024-12-07 09:58:35.685636] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.014 [2024-12-07 09:58:35.726123] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.014 [2024-12-07 09:58:35.726161] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.014 [2024-12-07 09:58:35.726169] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.014 [2024-12-07 09:58:35.726175] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.014 [2024-12-07 09:58:35.726180] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
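`waitforlisten 1305807` above blocks until the freshly started nvmf_tgt is listening on /var/tmp/spdk.sock. A stripped-down sketch under the simplifying assumption that readiness can be approximated by the socket file existing; the real helper also checks that the PID is still alive:

```shell
#!/bin/sh
# Simplified stand-in for waitforlisten: poll for the RPC UNIX socket with a
# bounded retry budget. Path and max_retries mirror the values in the log
# (/var/tmp/spdk.sock, max_retries=100).
wait_for_rpc_sock() {
    sock=${1:-/var/tmp/spdk.sock}
    retries=${2:-100}
    while [ "$retries" -gt 0 ]; do
        if [ -S "$sock" ]; then
            return 0
        fi
        retries=$((retries - 1))
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```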
00:24:07.014 [2024-12-07 09:58:35.726203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 
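The rpc_cmd calls that follow configure the target end to end. Spelled out as direct rpc.py invocations (the rpc.py path and socket are assumptions; the flags and values are verbatim from this log), the sequence is a configuration fragment, not a standalone script — it needs a running nvmf_tgt:

```shell
#!/bin/sh
# Assumed rpc.py location and socket; flags/values copied from the log.
# The --small-pool-count 154 setting deliberately starves the small iobuf
# pool so the wait-for-buffer path gets exercised.
configure_wait_for_buf_target() {
    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC accel_set_options --small-cache-size 0 --large-cache-size 0
    $RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    $RPC framework_start_init
    $RPC bdev_malloc_create -b Malloc0 32 512
    $RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
}
```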
00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:07.271 Malloc0 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:07.271 [2024-12-07 09:58:35.909492] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem 
nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:07.271 [2024-12-07 09:58:35.941709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.271 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:07.527 [2024-12-07 09:58:35.999020] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery 
subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:08.906 Initializing NVMe Controllers 00:24:08.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:08.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:08.906 Initialization complete. Launching workers. 00:24:08.906 ======================================================== 00:24:08.906 Latency(us) 00:24:08.906 Device Information : IOPS MiB/s Average min max 00:24:08.906 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 128.54 16.07 32208.34 7292.34 63846.03 00:24:08.906 ======================================================== 00:24:08.907 Total : 128.54 16.07 32208.34 7292.34 63846.03 00:24:08.907 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # 
nvmftestfini 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:08.907 rmmod nvme_tcp 00:24:08.907 rmmod nvme_fabrics 00:24:08.907 rmmod nvme_keyring 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 1305807 ']' 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 1305807 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 1305807 ']' 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 1305807 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1305807 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1305807' 00:24:08.907 killing process with pid 1305807 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 1305807 00:24:08.907 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 1305807 00:24:09.164 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:09.164 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:09.164 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:09.164 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:09.164 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:24:09.164 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:09.164 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:24:09.164 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:09.164 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:09.164 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.164 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.164 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.062 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:11.062 00:24:11.062 real 0m9.787s 00:24:11.062 user 0m3.711s 00:24:11.062 sys 0m4.482s 00:24:11.062 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:11.062 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:11.062 ************************************ 00:24:11.062 END TEST nvmf_wait_for_buf 00:24:11.062 ************************************ 00:24:11.062 09:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:11.062 09:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:11.062 09:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:11.062 09:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:11.062 09:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:11.320 ************************************ 00:24:11.320 START TEST nvmf_fuzz 00:24:11.320 ************************************ 00:24:11.320 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:11.320 * Looking for test storage... 
00:24:11.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:11.320 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:11.320 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:24:11.320 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:11.320 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:11.320 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:11.320 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:11.320 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:11.320 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:11.320 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:11.320 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:11.321 
09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:11.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.321 --rc genhtml_branch_coverage=1 00:24:11.321 --rc genhtml_function_coverage=1 00:24:11.321 --rc genhtml_legend=1 00:24:11.321 --rc geninfo_all_blocks=1 00:24:11.321 --rc 
geninfo_unexecuted_blocks=1 00:24:11.321 00:24:11.321 ' 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:11.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.321 --rc genhtml_branch_coverage=1 00:24:11.321 --rc genhtml_function_coverage=1 00:24:11.321 --rc genhtml_legend=1 00:24:11.321 --rc geninfo_all_blocks=1 00:24:11.321 --rc geninfo_unexecuted_blocks=1 00:24:11.321 00:24:11.321 ' 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:11.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.321 --rc genhtml_branch_coverage=1 00:24:11.321 --rc genhtml_function_coverage=1 00:24:11.321 --rc genhtml_legend=1 00:24:11.321 --rc geninfo_all_blocks=1 00:24:11.321 --rc geninfo_unexecuted_blocks=1 00:24:11.321 00:24:11.321 ' 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:11.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.321 --rc genhtml_branch_coverage=1 00:24:11.321 --rc genhtml_function_coverage=1 00:24:11.321 --rc genhtml_legend=1 00:24:11.321 --rc geninfo_all_blocks=1 00:24:11.321 --rc geninfo_unexecuted_blocks=1 00:24:11.321 00:24:11.321 ' 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.321 09:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:11.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.321 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.321 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:11.321 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:11.321 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:11.321 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.869 09:58:45 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:17.869 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:17.869 09:58:45 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:17.869 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:17.869 Found net devices under 0000:86:00.0: cvl_0_0 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:17.869 Found net devices under 0000:86:00.1: cvl_0_1 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:17.869 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # is_hw=yes 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:17.870 09:58:45 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:17.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:24:17.870 00:24:17.870 --- 10.0.0.2 ping statistics --- 00:24:17.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.870 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:17.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:24:17.870 00:24:17.870 --- 10.0.0.1 ping statistics --- 00:24:17.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.870 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # return 0 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1309581 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1309581 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' 
-z 1309581 ']' 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:17.870 Malloc0 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:17.870 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:49.934 Fuzzing completed. 
Shutting down the fuzz application 00:24:49.934 00:24:49.934 Dumping successful admin opcodes: 00:24:49.934 8, 9, 10, 24, 00:24:49.934 Dumping successful io opcodes: 00:24:49.934 0, 9, 00:24:49.934 NS: 0x200003aeff00 I/O qp, Total commands completed: 990862, total successful commands: 5802, random_seed: 2727942592 00:24:49.934 NS: 0x200003aeff00 admin qp, Total commands completed: 128877, total successful commands: 1047, random_seed: 435506048 00:24:49.934 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:49.934 Fuzzing completed. Shutting down the fuzz application 00:24:49.934 00:24:49.934 Dumping successful admin opcodes: 00:24:49.934 24, 00:24:49.934 Dumping successful io opcodes: 00:24:49.934 00:24:49.934 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1510667968 00:24:49.934 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1510741768 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:49.934 09:59:17 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:49.934 rmmod nvme_tcp 00:24:49.934 rmmod nvme_fabrics 00:24:49.934 rmmod nvme_keyring 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 1309581 ']' 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 1309581 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1309581 ']' 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 1309581 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1309581 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1309581' 00:24:49.934 killing process with pid 1309581 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 1309581 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 1309581 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.934 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.307 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:51.307 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:51.565 00:24:51.565 real 0m40.273s 00:24:51.565 user 0m54.063s 00:24:51.565 sys 0m15.796s 00:24:51.565 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:51.565 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:51.565 ************************************ 00:24:51.565 END TEST nvmf_fuzz 00:24:51.565 ************************************ 00:24:51.565 09:59:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:51.565 09:59:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:51.565 09:59:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:51.565 09:59:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:51.565 ************************************ 00:24:51.565 START TEST nvmf_multiconnection 00:24:51.565 ************************************ 00:24:51.565 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:51.565 * Looking for test storage... 
00:24:51.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:51.565 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:51.565 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:51.565 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:24:51.824 09:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
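[Editorial note] The cmp_versions trace above (scripts/common.sh: split each version string on `IFS=.-:` into an array, then compare components numerically, here deciding that lcov 1.15 is older than 2) can be sketched as a standalone function. The name `ver_lt` and its exact return convention are our own for illustration, not SPDK's API:

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version comparison traced above.
# ver_lt A B returns 0 (true) when version A sorts strictly before B.
# Assumes purely numeric components; 10# forces decimal so "08" is not octal.
ver_lt() {
    local -a v1 v2
    local i n
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    (( ${#v1[@]} > ${#v2[@]} )) && n=${#v1[@]} || n=${#v2[@]}
    for (( i = 0; i < n; i++ )); do
        # Missing components compare as 0 (1.15 is treated as 1.15.0)
        (( 10#${v1[i]:-0} > 10#${v2[i]:-0} )) && return 1
        (( 10#${v1[i]:-0} < 10#${v2[i]:-0} )) && return 0
    done
    return 1  # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"         # the lcov check seen in the trace
ver_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

The trailing-component padding matters: without it, comparing 1.15 against 2 would walk off the shorter array.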
00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:51.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.824 --rc genhtml_branch_coverage=1 00:24:51.824 --rc genhtml_function_coverage=1 00:24:51.824 --rc genhtml_legend=1 00:24:51.824 --rc geninfo_all_blocks=1 00:24:51.824 --rc geninfo_unexecuted_blocks=1 00:24:51.824 00:24:51.824 ' 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:51.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.824 --rc genhtml_branch_coverage=1 00:24:51.824 --rc genhtml_function_coverage=1 00:24:51.824 --rc genhtml_legend=1 00:24:51.824 --rc geninfo_all_blocks=1 00:24:51.824 --rc geninfo_unexecuted_blocks=1 00:24:51.824 00:24:51.824 ' 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:51.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.824 --rc genhtml_branch_coverage=1 00:24:51.824 --rc genhtml_function_coverage=1 00:24:51.824 --rc genhtml_legend=1 00:24:51.824 --rc geninfo_all_blocks=1 00:24:51.824 --rc geninfo_unexecuted_blocks=1 00:24:51.824 00:24:51.824 ' 00:24:51.824 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:51.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.824 --rc genhtml_branch_coverage=1 00:24:51.824 --rc genhtml_function_coverage=1 00:24:51.824 --rc genhtml_legend=1 00:24:51.824 --rc geninfo_all_blocks=1 00:24:51.825 --rc geninfo_unexecuted_blocks=1 00:24:51.825 00:24:51.825 ' 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.825 09:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:51.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:24:51.825 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
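[Editorial note] The gather_supported_nvmf_pci_devs trace that follows builds per-family device lists from vendor/device ID pairs (intel=0x8086, mellanox=0x15b3; e810 covers 0x1592 and 0x159b, x722 covers 0x37d2). A minimal classifier mirroring that matching, with the ID pairs taken from the trace; `nic_family` is our own illustrative name:

```shell
#!/usr/bin/env bash
# Classify a PCI vendor:device pair into the NIC families the trace checks.
# IDs are the ones visible in the log; the mlx list is abbreviated to a
# vendor wildcard here, which is a simplification of the per-device arrays.
nic_family() {
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:*)                    echo mlx ;;
        *)                           echo unknown ;;
    esac
}

nic_family 0x8086 0x159b   # the devices found below at 0000:86:00.0/1 → e810
```

Both ports discovered below (0x8086 - 0x159b, driver ice) land in the e810 bucket, which is why pci_devs is reduced to the e810 array in the trace.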
00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.383 09:59:25 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:58.383 09:59:25 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:58.383 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:58.383 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:58.383 Found net devices under 0000:86:00.0: cvl_0_0 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 
00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:58.383 Found net devices under 0000:86:00.1: cvl_0_1 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # is_hw=yes 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.383 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.384 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:58.384 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m 
comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:58.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:24:58.384 00:24:58.384 --- 10.0.0.2 ping statistics --- 00:24:58.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.384 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:58.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:24:58.384 00:24:58.384 --- 10.0.0.1 ping statistics --- 00:24:58.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.384 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # return 0 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:58.384 09:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=1318134 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 1318134 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 1318134 ']' 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
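The `nvmf_tcp_init` sequence traced above (lines `nvmf/common.sh@250`–`@291`) moves one port of the NIC into a network namespace and leaves its peer in the default namespace, giving the test a point-to-point 10.0.0.0/24 link between initiator and target. A minimal dry-run sketch of that flow, with interface names and addresses taken from this log; the `run` echo wrapper is added here so the sketch executes without root or real hardware:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns point-to-point setup traced in the log above.
# "run" echoes each command instead of executing it, so no root is needed.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # moved into the namespace; gets the target IP
INITIATOR_IF=cvl_0_1     # stays in the default (initiator) namespace
NS=${TARGET_IF}_ns_spdk  # cvl_0_0_ns_spdk in this log

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port (4420) on the initiator-side interface
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # verify the target is reachable, as the log does
```

Both directions are then ping-verified (10.0.0.2 from the default namespace, 10.0.0.1 from inside it) before `return 0` lets the test proceed.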
00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.384 [2024-12-07 09:59:26.154939] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:24:58.384 [2024-12-07 09:59:26.154990] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.384 [2024-12-07 09:59:26.212562] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.384 [2024-12-07 09:59:26.255052] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.384 [2024-12-07 09:59:26.255096] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.384 [2024-12-07 09:59:26.255104] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.384 [2024-12-07 09:59:26.255110] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.384 [2024-12-07 09:59:26.255115] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
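The `nvmfappstart`/`waitforlisten` step above launches `nvmf_tgt` inside the target namespace and then blocks until its RPC socket is ready. A simplified sketch of that pattern; `waitforlisten` here is a stand-in for the real helper in `autotest_common.sh` (which retries an actual RPC call), the binary path, pid, and masks are copied from this log, and the launch line is left commented because it needs a built SPDK tree and root:

```shell
# Sketch of the nvmfappstart flow traced above: start the target inside the
# namespace, record its pid, then poll until the RPC socket is available.
NS_CMD="ip netns exec cvl_0_0_ns_spdk"
APP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

# $NS_CMD $APP -i 0 -e 0xFFFF -m 0xF &   # commented: needs SPDK + root
nvmfpid=1318134                           # pid captured from this log

# Simplified stand-in for autotest_common.sh's waitforlisten: succeed once
# the UNIX-domain RPC socket exists, fail if the process dies or we time out.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # target process died
    [ -S "$rpc_addr" ] && return 0           # socket exists: ready
    sleep 0.1
  done
  return 1                                   # timed out
}
```

Once `waitforlisten` returns, the script installs the `nvmftestfini` trap and starts issuing `rpc_cmd` calls (transport creation, then the per-subsystem Malloc/subsystem/listener loop that follows).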
00:24:58.384 [2024-12-07 09:59:26.255210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.384 [2024-12-07 09:59:26.255306] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.384 [2024-12-07 09:59:26.255371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.384 [2024-12-07 09:59:26.255372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.384 [2024-12-07 09:59:26.406637] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:58.384 09:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.384 Malloc1 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.384 [2024-12-07 09:59:26.465039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.384 Malloc2 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.384 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 Malloc3 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 Malloc4 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 
09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 Malloc5 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 Malloc6 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 Malloc7 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 Malloc8 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.385 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.386 09:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.386 Malloc9 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.386 Malloc10 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.386 Malloc11 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:58.386 
09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.386 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
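[Editor's note] The target-side xtrace records above come from a loop in test/nvmf/target/multiconnection.sh (lines 21-25 per the `@21`..`@25` markers). A minimal sketch of that loop follows; `rpc_cmd` in the real test wraps SPDK's `scripts/rpc.py`, and the `echo` default here is an assumption added so the sketch dry-runs without a live nvmf target:

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem setup loop seen in the log above.
# RPC_CMD defaults to 'echo' (a stand-in added for dry-running); point it
# at SPDK's scripts/rpc.py to issue the real RPCs against a running target.
RPC_CMD="${RPC_CMD:-echo}"
NVMF_SUBSYS=11   # the log creates cnode1..cnode11

setup_subsystems() {
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        # 64 MiB malloc bdev with 512-byte blocks, named Malloc<i>
        $RPC_CMD bdev_malloc_create 64 512 -b "Malloc$i"
        # -a: allow any host, -s: serial number (matched later by waitforserial)
        $RPC_CMD nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        $RPC_CMD nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        # TCP listener on the target address/port used throughout the log
        $RPC_CMD nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
}
setup_subsystems
```

This mirrors the four RPCs repeated once per cnode in the records above; only the loop variable changes between iterations.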
00:24:59.317 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:59.317 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:59.317 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:59.317 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:59.317 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:01.843 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:01.843 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:01.843 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:01.843 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:01.843 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:01.843 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:01.843 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.843 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:02.776 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:02.776 09:59:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:02.776 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:02.776 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:02.776 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:04.674 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:04.674 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:04.674 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:04.674 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:04.674 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:04.674 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:04.674 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.674 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:06.043 09:59:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:06.043 09:59:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:06.043 09:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:06.043 09:59:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:06.043 09:59:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:07.936 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:07.936 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:07.936 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:07.936 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:07.936 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:07.936 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:07.936 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.936 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:09.304 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:09.304 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:09.304 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:09.304 
09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:09.304 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:11.275 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:11.275 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:11.275 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:11.275 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:11.275 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:11.275 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:11.275 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.275 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:12.256 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:12.256 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:12.256 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:12.256 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:12.256 09:59:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:14.161 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:14.161 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:14.161 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:14.161 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:14.161 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:14.161 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:14.161 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:14.161 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:15.532 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:15.532 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:15.532 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:15.532 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:15.532 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:17.425 09:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:17.425 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:17.425 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:17.425 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:17.425 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:17.425 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:17.425 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.425 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:18.794 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:18.794 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:18.794 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:18.794 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:18.794 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:20.691 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:20.691 09:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:20.691 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:25:20.691 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:20.691 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:20.691 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:20.691 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.691 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:22.059 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:22.059 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:22.059 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:22.059 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:22.059 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:24.594 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:24.594 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:24.594 09:59:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:24.594 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:24.594 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:24.594 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:24.594 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.594 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:25.525 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:25.525 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:25.525 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:25.525 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:25.525 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:27.420 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:27.420 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:27.420 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:27.420 09:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:27.420 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:27.420 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:27.420 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.420 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:28.788 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:28.789 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:28.789 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:28.789 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:28.789 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:31.310 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:31.310 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:31.310 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:31.310 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:31.310 09:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:31.310 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:31.311 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.311 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:32.243 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:32.243 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:32.243 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:32.243 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:32.243 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:34.138 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:34.138 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:34.138 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:34.138 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:34.138 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:34.138 
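[Editor's note] Host-side, each iteration above pairs `nvme connect` with `waitforserial` from autotest_common.sh, which polls `lsblk -l -o NAME,SERIAL` until a block device with the subsystem's serial shows up (up to 16 tries, 2 s apart, per the `i++ <= 15` / `sleep 2` records). A sketch under those assumptions; the `list_serials` helper is an addition so the polling logic can be exercised without hardware, and the driver loop is left commented out because it needs nvme-cli and the live target at 10.0.0.2:

```shell
#!/usr/bin/env bash
# Host-side connect-and-wait step, reconstructed from the log records above.
# list_serials wraps lsblk so waitforserial can be tested with a stub.
list_serials() { lsblk -l -o NAME,SERIAL; }

waitforserial() {
    serial=$1
    nvme_device_counter=1
    i=0
    while [ "$i" -le 15 ]; do
        i=$((i + 1))
        # count block devices whose serial matches (|| true: grep -c exits 1 on 0 hits)
        nvme_devices=$(list_serials | grep -c "$serial" || true)
        [ "$nvme_devices" -eq "$nvme_device_counter" ] && return 0
        sleep 2
    done
    return 1
}

connect_all() {
    # hostnqn/hostid copied verbatim from the log's nvme connect lines
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    for i in $(seq 1 11); do
        nvme connect --hostnqn="$hostnqn" --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
            -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
        waitforserial "SPDK$i"
    done
}
# connect_all   # uncomment on a host with nvme-cli and the target reachable
```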
10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:25:34.138 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:25:34.138 [global]
00:25:34.138 thread=1
00:25:34.138 invalidate=1
00:25:34.138 rw=read
00:25:34.138 time_based=1
00:25:34.138 runtime=10
00:25:34.138 ioengine=libaio
00:25:34.138 direct=1
00:25:34.138 bs=262144
00:25:34.138 iodepth=64
00:25:34.138 norandommap=1
00:25:34.138 numjobs=1
00:25:34.138
00:25:34.395 [job0]
00:25:34.396 filename=/dev/nvme0n1
00:25:34.396 [job1]
00:25:34.396 filename=/dev/nvme10n1
00:25:34.396 [job2]
00:25:34.396 filename=/dev/nvme1n1
00:25:34.396 [job3]
00:25:34.396 filename=/dev/nvme2n1
00:25:34.396 [job4]
00:25:34.396 filename=/dev/nvme3n1
00:25:34.396 [job5]
00:25:34.396 filename=/dev/nvme4n1
00:25:34.396 [job6]
00:25:34.396 filename=/dev/nvme5n1
00:25:34.396 [job7]
00:25:34.396 filename=/dev/nvme6n1
00:25:34.396 [job8]
00:25:34.396 filename=/dev/nvme7n1
00:25:34.396 [job9]
00:25:34.396 filename=/dev/nvme8n1
00:25:34.396 [job10]
00:25:34.396 filename=/dev/nvme9n1
00:25:34.396 Could not set queue depth (nvme0n1)
00:25:34.396 Could not set queue depth (nvme10n1)
00:25:34.396 Could not set queue depth (nvme1n1)
00:25:34.396 Could not set queue depth (nvme2n1)
00:25:34.396 Could not set queue depth (nvme3n1)
00:25:34.396 Could not set queue depth (nvme4n1)
00:25:34.396 Could not set queue depth (nvme5n1)
00:25:34.396 Could not set queue depth (nvme6n1)
00:25:34.396 Could not set queue depth (nvme7n1)
00:25:34.396 Could not set queue depth (nvme8n1)
00:25:34.396 Could not set queue depth (nvme9n1)
00:25:34.653 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.653 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.653 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.653 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.653 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.653 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.653 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.653 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.653 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.653 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.653 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.653 fio-3.35
00:25:34.653 Starting 11 threads
00:25:46.846
00:25:46.846 job0: (groupid=0, jobs=1): err= 0: pid=1324704: Sat Dec 7 10:00:13 2024
00:25:46.846 read: IOPS=298, BW=74.7MiB/s (78.3MB/s)(762MiB/10204msec)
00:25:46.846 slat (usec): min=9, max=303840, avg=1880.13, stdev=12351.98
00:25:46.846 clat (usec): min=1871, max=1052.2k, avg=212149.78, stdev=189815.11
00:25:46.846 lat (usec): min=1914, max=1052.2k, avg=214029.91, stdev=191506.16
00:25:46.846 clat percentiles (msec):
00:25:46.846 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 14], 20.00th=[ 53],
00:25:46.846 | 30.00th=[ 113], 40.00th=[ 150], 50.00th=[ 174], 60.00th=[ 201],
00:25:46.846 | 70.00th=[ 236], 80.00th=[ 296], 90.00th=[ 456], 95.00th=[ 684],
00:25:46.846 | 99.00th=[ 869], 99.50th=[ 885], 99.90th=[ 894], 99.95th=[ 1053],
00:25:46.846 | 99.99th=[ 1053]
00:25:46.846 bw ( KiB/s): min=14336, max=189440, per=9.83%, avg=76390.40, stdev=50066.88, samples=20
00:25:46.846 iops : min= 56, max= 740, avg=298.40, stdev=195.57, samples=20
00:25:46.846 lat (msec) : 2=0.07%, 4=0.59%, 10=4.79%, 20=7.38%, 50=6.79%
00:25:46.846 lat (msec) : 100=7.45%, 250=45.21%, 500=19.55%, 750=4.79%, 1000=3.28%
00:25:46.846 lat (msec) : 2000=0.10%
00:25:46.846 cpu : usr=0.09%, sys=1.11%, ctx=777, majf=0, minf=4097
00:25:46.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9%
00:25:46.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:46.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:46.846 issued rwts: total=3048,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:46.846 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:46.846 job1: (groupid=0, jobs=1): err= 0: pid=1324707: Sat Dec 7 10:00:13 2024
00:25:46.846 read: IOPS=443, BW=111MiB/s (116MB/s)(1132MiB/10206msec)
00:25:46.846 slat (usec): min=18, max=127375, avg=1547.94, stdev=6755.31
00:25:46.846 clat (usec): min=914, max=926921, avg=142510.59, stdev=128060.04
00:25:46.846 lat (usec): min=947, max=926955, avg=144058.53, stdev=128898.87
00:25:46.846 clat percentiles (msec):
00:25:46.846 | 1.00th=[ 6], 5.00th=[ 22], 10.00th=[ 45], 20.00th=[ 50],
00:25:46.846 | 30.00th=[ 55], 40.00th=[ 72], 50.00th=[ 95], 60.00th=[ 148],
00:25:46.846 | 70.00th=[ 180], 80.00th=[ 228], 90.00th=[ 284], 95.00th=[ 351],
00:25:46.846 | 99.00th=[ 768], 99.50th=[ 852], 99.90th=[ 902], 99.95th=[ 902],
00:25:46.846 | 99.99th=[ 927]
00:25:46.846 bw ( KiB/s): min=37376, max=313344, per=14.71%, avg=114278.40, stdev=72302.02, samples=20
00:25:46.846 iops : min= 146, max= 1224, avg=446.40, stdev=282.43, samples=20
00:25:46.846 lat (usec) : 1000=0.07%
00:25:46.846 lat (msec) : 2=0.15%, 4=0.18%, 10=1.79%, 20=2.50%, 50=16.36%
00:25:46.846 lat (msec) : 100=29.51%, 250=33.59%, 500=14.20%, 750=0.53%, 1000=1.13%
00:25:46.846 cpu : usr=0.21%, sys=1.89%, ctx=1150,
majf=0, minf=3722 00:25:46.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:46.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:46.846 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.846 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:46.846 job2: (groupid=0, jobs=1): err= 0: pid=1324708: Sat Dec 7 10:00:13 2024 00:25:46.846 read: IOPS=174, BW=43.5MiB/s (45.6MB/s)(444MiB/10205msec) 00:25:46.846 slat (usec): min=9, max=506617, avg=3035.07, stdev=23966.16 00:25:46.846 clat (usec): min=1002, max=1141.0k, avg=364046.35, stdev=293659.25 00:25:46.846 lat (usec): min=1038, max=1261.3k, avg=367081.42, stdev=296274.09 00:25:46.846 clat percentiles (usec): 00:25:46.846 | 1.00th=[ 1237], 5.00th=[ 2245], 10.00th=[ 7832], 00:25:46.846 | 20.00th=[ 85459], 30.00th=[ 164627], 40.00th=[ 233833], 00:25:46.846 | 50.00th=[ 295699], 60.00th=[ 371196], 70.00th=[ 484443], 00:25:46.846 | 80.00th=[ 624952], 90.00th=[ 792724], 95.00th=[ 943719], 00:25:46.846 | 99.00th=[1115685], 99.50th=[1115685], 99.90th=[1149240], 00:25:46.846 | 99.95th=[1149240], 99.99th=[1149240] 00:25:46.846 bw ( KiB/s): min= 8704, max=133632, per=5.65%, avg=43860.40, stdev=31819.16, samples=20 00:25:46.846 iops : min= 34, max= 522, avg=171.30, stdev=124.26, samples=20 00:25:46.846 lat (msec) : 2=4.50%, 4=2.36%, 10=3.71%, 20=0.96%, 50=5.40% 00:25:46.846 lat (msec) : 100=3.88%, 250=22.23%, 500=27.12%, 750=17.78%, 1000=8.44% 00:25:46.846 lat (msec) : 2000=3.60% 00:25:46.846 cpu : usr=0.07%, sys=0.74%, ctx=496, majf=0, minf=4097 00:25:46.846 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.5% 00:25:46.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.846 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:46.846 issued rwts: total=1777,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:25:46.846 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:46.846 job3: (groupid=0, jobs=1): err= 0: pid=1324709: Sat Dec 7 10:00:13 2024 00:25:46.846 read: IOPS=293, BW=73.5MiB/s (77.0MB/s)(750MiB/10207msec) 00:25:46.846 slat (usec): min=13, max=309746, avg=1187.46, stdev=10609.62 00:25:46.846 clat (usec): min=950, max=1106.8k, avg=216374.16, stdev=233227.84 00:25:46.846 lat (usec): min=981, max=1106.9k, avg=217561.62, stdev=234646.32 00:25:46.846 clat percentiles (usec): 00:25:46.846 | 1.00th=[ 1663], 5.00th=[ 9372], 10.00th=[ 16319], 00:25:46.846 | 20.00th=[ 49546], 30.00th=[ 73925], 40.00th=[ 81265], 00:25:46.846 | 50.00th=[ 123208], 60.00th=[ 202376], 70.00th=[ 246416], 00:25:46.846 | 80.00th=[ 333448], 90.00th=[ 557843], 95.00th=[ 792724], 00:25:46.846 | 99.00th=[ 977273], 99.50th=[1027605], 99.90th=[1082131], 00:25:46.846 | 99.95th=[1082131], 99.99th=[1098908] 00:25:46.846 bw ( KiB/s): min=13312, max=260096, per=9.67%, avg=75136.00, stdev=60576.98, samples=20 00:25:46.846 iops : min= 52, max= 1016, avg=293.50, stdev=236.63, samples=20 00:25:46.846 lat (usec) : 1000=0.03% 00:25:46.846 lat (msec) : 2=1.67%, 4=1.50%, 10=1.93%, 20=7.67%, 50=7.24% 00:25:46.846 lat (msec) : 100=25.94%, 250=24.64%, 500=18.07%, 750=5.40%, 1000=5.34% 00:25:46.846 lat (msec) : 2000=0.57% 00:25:46.846 cpu : usr=0.08%, sys=1.20%, ctx=1077, majf=0, minf=4098 00:25:46.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:25:46.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:46.846 issued rwts: total=2999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.846 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:46.846 job4: (groupid=0, jobs=1): err= 0: pid=1324710: Sat Dec 7 10:00:13 2024 00:25:46.846 read: IOPS=255, BW=63.9MiB/s (67.0MB/s)(651MiB/10180msec) 00:25:46.846 slat (usec): min=17, 
max=165927, avg=2630.97, stdev=14136.84 00:25:46.846 clat (usec): min=1721, max=936398, avg=247460.56, stdev=267936.39 00:25:46.846 lat (usec): min=1765, max=936453, avg=250091.52, stdev=271031.00 00:25:46.846 clat percentiles (msec): 00:25:46.846 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 8], 20.00th=[ 14], 00:25:46.846 | 30.00th=[ 31], 40.00th=[ 58], 50.00th=[ 128], 60.00th=[ 232], 00:25:46.846 | 70.00th=[ 368], 80.00th=[ 535], 90.00th=[ 701], 95.00th=[ 776], 00:25:46.846 | 99.00th=[ 869], 99.50th=[ 885], 99.90th=[ 911], 99.95th=[ 927], 00:25:46.846 | 99.99th=[ 936] 00:25:46.846 bw ( KiB/s): min=14336, max=257024, per=8.37%, avg=64987.30, stdev=63138.78, samples=20 00:25:46.846 iops : min= 56, max= 1004, avg=253.85, stdev=246.63, samples=20 00:25:46.846 lat (msec) : 2=0.23%, 4=5.19%, 10=8.15%, 20=13.03%, 50=11.68% 00:25:46.846 lat (msec) : 100=8.76%, 250=14.64%, 500=16.64%, 750=15.03%, 1000=6.65% 00:25:46.846 cpu : usr=0.08%, sys=1.12%, ctx=876, majf=0, minf=4097 00:25:46.846 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:25:46.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:46.846 issued rwts: total=2602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.846 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:46.846 job5: (groupid=0, jobs=1): err= 0: pid=1324711: Sat Dec 7 10:00:13 2024 00:25:46.846 read: IOPS=252, BW=63.2MiB/s (66.2MB/s)(643MiB/10182msec) 00:25:46.846 slat (usec): min=15, max=493880, avg=3500.68, stdev=16622.40 00:25:46.846 clat (usec): min=1889, max=1190.3k, avg=249574.83, stdev=193419.75 00:25:46.846 lat (usec): min=1945, max=1190.4k, avg=253075.51, stdev=195307.56 00:25:46.846 clat percentiles (msec): 00:25:46.846 | 1.00th=[ 6], 5.00th=[ 78], 10.00th=[ 94], 20.00th=[ 116], 00:25:46.846 | 30.00th=[ 144], 40.00th=[ 167], 50.00th=[ 184], 60.00th=[ 207], 00:25:46.846 | 70.00th=[ 275], 80.00th=[ 
330], 90.00th=[ 575], 95.00th=[ 667], 00:25:46.846 | 99.00th=[ 1053], 99.50th=[ 1083], 99.90th=[ 1099], 99.95th=[ 1183], 00:25:46.846 | 99.99th=[ 1183] 00:25:46.846 bw ( KiB/s): min=10752, max=127488, per=8.27%, avg=64204.80, stdev=34080.82, samples=20 00:25:46.846 iops : min= 42, max= 498, avg=250.80, stdev=133.13, samples=20 00:25:46.846 lat (msec) : 2=0.04%, 4=0.08%, 10=2.22%, 50=0.08%, 100=11.20% 00:25:46.846 lat (msec) : 250=54.43%, 500=20.49%, 750=9.14%, 1000=1.01%, 2000=1.32% 00:25:46.846 cpu : usr=0.08%, sys=1.17%, ctx=408, majf=0, minf=4097 00:25:46.846 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:25:46.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:46.847 issued rwts: total=2572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:46.847 job6: (groupid=0, jobs=1): err= 0: pid=1324713: Sat Dec 7 10:00:13 2024 00:25:46.847 read: IOPS=179, BW=44.9MiB/s (47.1MB/s)(457MiB/10181msec) 00:25:46.847 slat (usec): min=19, max=248357, avg=5478.00, stdev=20826.74 00:25:46.847 clat (msec): min=31, max=964, avg=350.55, stdev=239.52 00:25:46.847 lat (msec): min=31, max=964, avg=356.02, stdev=243.37 00:25:46.847 clat percentiles (msec): 00:25:46.847 | 1.00th=[ 38], 5.00th=[ 45], 10.00th=[ 58], 20.00th=[ 95], 00:25:46.847 | 30.00th=[ 165], 40.00th=[ 226], 50.00th=[ 330], 60.00th=[ 405], 00:25:46.847 | 70.00th=[ 502], 80.00th=[ 617], 90.00th=[ 693], 95.00th=[ 743], 00:25:46.847 | 99.00th=[ 835], 99.50th=[ 852], 99.90th=[ 927], 99.95th=[ 961], 00:25:46.847 | 99.99th=[ 961] 00:25:46.847 bw ( KiB/s): min=15872, max=146432, per=5.81%, avg=45158.40, stdev=37603.11, samples=20 00:25:46.847 iops : min= 62, max= 572, avg=176.40, stdev=146.89, samples=20 00:25:46.847 lat (msec) : 50=8.04%, 100=12.47%, 250=23.14%, 500=26.31%, 750=25.44% 00:25:46.847 lat (msec) : 1000=4.60% 
00:25:46.847 cpu : usr=0.05%, sys=0.84%, ctx=276, majf=0, minf=4097 00:25:46.847 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.6% 00:25:46.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.847 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:46.847 issued rwts: total=1828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:46.847 job7: (groupid=0, jobs=1): err= 0: pid=1324714: Sat Dec 7 10:00:13 2024 00:25:46.847 read: IOPS=170, BW=42.6MiB/s (44.6MB/s)(434MiB/10181msec) 00:25:46.847 slat (usec): min=15, max=283926, avg=3806.76, stdev=18341.16 00:25:46.847 clat (usec): min=756, max=916086, avg=371473.78, stdev=259952.80 00:25:46.847 lat (usec): min=817, max=916133, avg=375280.54, stdev=263780.32 00:25:46.847 clat percentiles (msec): 00:25:46.847 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 19], 20.00th=[ 115], 00:25:46.847 | 30.00th=[ 192], 40.00th=[ 255], 50.00th=[ 326], 60.00th=[ 468], 00:25:46.847 | 70.00th=[ 567], 80.00th=[ 676], 90.00th=[ 726], 95.00th=[ 768], 00:25:46.847 | 99.00th=[ 802], 99.50th=[ 818], 99.90th=[ 860], 99.95th=[ 919], 00:25:46.847 | 99.99th=[ 919] 00:25:46.847 bw ( KiB/s): min=12800, max=108544, per=5.51%, avg=42781.10, stdev=25601.88, samples=20 00:25:46.847 iops : min= 50, max= 424, avg=167.10, stdev=100.01, samples=20 00:25:46.847 lat (usec) : 1000=0.40% 00:25:46.847 lat (msec) : 2=0.12%, 4=0.75%, 10=4.44%, 20=4.96%, 50=6.23% 00:25:46.847 lat (msec) : 100=1.96%, 250=19.26%, 500=24.86%, 750=29.47%, 1000=7.55% 00:25:46.847 cpu : usr=0.04%, sys=0.74%, ctx=455, majf=0, minf=4097 00:25:46.847 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:25:46.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.847 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:46.847 issued rwts: total=1734,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:25:46.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:46.847 job8: (groupid=0, jobs=1): err= 0: pid=1324718: Sat Dec 7 10:00:13 2024 00:25:46.847 read: IOPS=407, BW=102MiB/s (107MB/s)(1039MiB/10197msec) 00:25:46.847 slat (usec): min=13, max=701678, avg=2353.52, stdev=16391.89 00:25:46.847 clat (msec): min=2, max=848, avg=154.44, stdev=161.05 00:25:46.847 lat (msec): min=3, max=1202, avg=156.80, stdev=163.31 00:25:46.847 clat percentiles (msec): 00:25:46.847 | 1.00th=[ 31], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 40], 00:25:46.847 | 30.00th=[ 50], 40.00th=[ 75], 50.00th=[ 95], 60.00th=[ 140], 00:25:46.847 | 70.00th=[ 165], 80.00th=[ 209], 90.00th=[ 334], 95.00th=[ 558], 00:25:46.847 | 99.00th=[ 751], 99.50th=[ 810], 99.90th=[ 810], 99.95th=[ 810], 00:25:46.847 | 99.99th=[ 852] 00:25:46.847 bw ( KiB/s): min=12288, max=337920, per=14.20%, avg=110295.58, stdev=87940.93, samples=19 00:25:46.847 iops : min= 48, max= 1320, avg=430.84, stdev=343.52, samples=19 00:25:46.847 lat (msec) : 4=0.07%, 10=0.05%, 50=30.33%, 100=21.27%, 250=31.27% 00:25:46.847 lat (msec) : 500=11.47%, 750=4.50%, 1000=1.03% 00:25:46.847 cpu : usr=0.15%, sys=1.66%, ctx=635, majf=0, minf=4097 00:25:46.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:46.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:46.847 issued rwts: total=4157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:46.847 job9: (groupid=0, jobs=1): err= 0: pid=1324719: Sat Dec 7 10:00:13 2024 00:25:46.847 read: IOPS=150, BW=37.6MiB/s (39.4MB/s)(383MiB/10184msec) 00:25:46.847 slat (usec): min=17, max=266767, avg=4127.07, stdev=20278.99 00:25:46.847 clat (usec): min=1855, max=965482, avg=420832.34, stdev=257251.55 00:25:46.847 lat (usec): min=1908, max=965512, avg=424959.41, 
stdev=260369.10 00:25:46.847 clat percentiles (msec): 00:25:46.847 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 24], 20.00th=[ 138], 00:25:46.847 | 30.00th=[ 284], 40.00th=[ 363], 50.00th=[ 447], 60.00th=[ 527], 00:25:46.847 | 70.00th=[ 575], 80.00th=[ 676], 90.00th=[ 743], 95.00th=[ 776], 00:25:46.847 | 99.00th=[ 894], 99.50th=[ 944], 99.90th=[ 969], 99.95th=[ 969], 00:25:46.847 | 99.99th=[ 969] 00:25:46.847 bw ( KiB/s): min=19968, max=96448, per=4.84%, avg=37590.40, stdev=20339.69, samples=20 00:25:46.847 iops : min= 78, max= 376, avg=146.80, stdev=79.34, samples=20 00:25:46.847 lat (msec) : 2=0.13%, 4=0.91%, 10=4.96%, 20=1.57%, 50=5.48% 00:25:46.847 lat (msec) : 100=5.35%, 250=8.75%, 500=29.18%, 750=34.86%, 1000=8.81% 00:25:46.847 cpu : usr=0.07%, sys=0.66%, ctx=400, majf=0, minf=4097 00:25:46.847 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:25:46.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.847 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:46.847 issued rwts: total=1532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:46.847 job10: (groupid=0, jobs=1): err= 0: pid=1324720: Sat Dec 7 10:00:13 2024 00:25:46.847 read: IOPS=411, BW=103MiB/s (108MB/s)(1048MiB/10188msec) 00:25:46.847 slat (usec): min=14, max=418373, avg=1542.65, stdev=10206.88 00:25:46.847 clat (usec): min=987, max=870395, avg=153775.18, stdev=179035.23 00:25:46.847 lat (usec): min=1073, max=870428, avg=155317.82, stdev=180319.40 00:25:46.847 clat percentiles (msec): 00:25:46.847 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 14], 20.00th=[ 32], 00:25:46.847 | 30.00th=[ 48], 40.00th=[ 66], 50.00th=[ 78], 60.00th=[ 103], 00:25:46.847 | 70.00th=[ 186], 80.00th=[ 245], 90.00th=[ 359], 95.00th=[ 600], 00:25:46.847 | 99.00th=[ 810], 99.50th=[ 835], 99.90th=[ 869], 99.95th=[ 869], 00:25:46.847 | 99.99th=[ 869] 00:25:46.847 bw ( KiB/s): min= 8704, 
max=273408, per=13.61%, avg=105710.40, stdev=82342.11, samples=20 00:25:46.847 iops : min= 34, max= 1068, avg=412.90, stdev=321.66, samples=20 00:25:46.847 lat (usec) : 1000=0.02% 00:25:46.847 lat (msec) : 2=0.14%, 4=0.57%, 10=4.51%, 20=5.84%, 50=19.94% 00:25:46.847 lat (msec) : 100=28.62%, 250=21.44%, 500=11.50%, 750=5.37%, 1000=2.05% 00:25:46.847 cpu : usr=0.20%, sys=1.64%, ctx=1357, majf=0, minf=4097 00:25:46.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:46.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:46.847 issued rwts: total=4193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:46.847 00:25:46.847 Run status group 0 (all jobs): 00:25:46.847 READ: bw=759MiB/s (795MB/s), 37.6MiB/s-111MiB/s (39.4MB/s-116MB/s), io=7743MiB (8119MB), run=10180-10207msec 00:25:46.847 00:25:46.847 Disk stats (read/write): 00:25:46.847 nvme0n1: ios=6028/0, merge=0/0, ticks=1258064/0, in_queue=1258064, util=97.29% 00:25:46.847 nvme10n1: ios=8996/0, merge=0/0, ticks=1257390/0, in_queue=1257390, util=97.52% 00:25:46.847 nvme1n1: ios=3487/0, merge=0/0, ticks=1261471/0, in_queue=1261471, util=97.75% 00:25:46.847 nvme2n1: ios=5928/0, merge=0/0, ticks=1263450/0, in_queue=1263450, util=97.93% 00:25:46.847 nvme3n1: ios=5064/0, merge=0/0, ticks=1214664/0, in_queue=1214664, util=97.90% 00:25:46.847 nvme4n1: ios=4996/0, merge=0/0, ticks=1207208/0, in_queue=1207208, util=98.27% 00:25:46.847 nvme5n1: ios=3472/0, merge=0/0, ticks=1206969/0, in_queue=1206969, util=98.42% 00:25:46.847 nvme6n1: ios=3240/0, merge=0/0, ticks=1218610/0, in_queue=1218610, util=98.55% 00:25:46.847 nvme7n1: ios=8313/0, merge=0/0, ticks=1270068/0, in_queue=1270068, util=98.96% 00:25:46.847 nvme8n1: ios=2922/0, merge=0/0, ticks=1225193/0, in_queue=1225193, util=99.10% 00:25:46.847 nvme9n1: ios=8312/0, 
merge=0/0, ticks=1248141/0, in_queue=1248141, util=99.28% 00:25:46.847 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:46.847 [global] 00:25:46.847 thread=1 00:25:46.847 invalidate=1 00:25:46.847 rw=randwrite 00:25:46.847 time_based=1 00:25:46.847 runtime=10 00:25:46.847 ioengine=libaio 00:25:46.847 direct=1 00:25:46.847 bs=262144 00:25:46.847 iodepth=64 00:25:46.847 norandommap=1 00:25:46.847 numjobs=1 00:25:46.847 00:25:46.847 [job0] 00:25:46.847 filename=/dev/nvme0n1 00:25:46.847 [job1] 00:25:46.847 filename=/dev/nvme10n1 00:25:46.847 [job2] 00:25:46.847 filename=/dev/nvme1n1 00:25:46.847 [job3] 00:25:46.847 filename=/dev/nvme2n1 00:25:46.847 [job4] 00:25:46.847 filename=/dev/nvme3n1 00:25:46.847 [job5] 00:25:46.847 filename=/dev/nvme4n1 00:25:46.847 [job6] 00:25:46.847 filename=/dev/nvme5n1 00:25:46.847 [job7] 00:25:46.847 filename=/dev/nvme6n1 00:25:46.847 [job8] 00:25:46.847 filename=/dev/nvme7n1 00:25:46.847 [job9] 00:25:46.847 filename=/dev/nvme8n1 00:25:46.847 [job10] 00:25:46.847 filename=/dev/nvme9n1 00:25:46.847 Could not set queue depth (nvme0n1) 00:25:46.847 Could not set queue depth (nvme10n1) 00:25:46.847 Could not set queue depth (nvme1n1) 00:25:46.847 Could not set queue depth (nvme2n1) 00:25:46.848 Could not set queue depth (nvme3n1) 00:25:46.848 Could not set queue depth (nvme4n1) 00:25:46.848 Could not set queue depth (nvme5n1) 00:25:46.848 Could not set queue depth (nvme6n1) 00:25:46.848 Could not set queue depth (nvme7n1) 00:25:46.848 Could not set queue depth (nvme8n1) 00:25:46.848 Could not set queue depth (nvme9n1) 00:25:46.848 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:46.848 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 
00:25:46.848 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:46.848 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:46.848 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:46.848 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:46.848 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:46.848 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:46.848 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:46.848 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:46.848 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:46.848 fio-3.35 00:25:46.848 Starting 11 threads 00:25:56.808 00:25:56.808 job0: (groupid=0, jobs=1): err= 0: pid=1326156: Sat Dec 7 10:00:25 2024 00:25:56.808 write: IOPS=354, BW=88.6MiB/s (92.9MB/s)(901MiB/10172msec); 0 zone resets 00:25:56.808 slat (usec): min=19, max=48364, avg=2300.27, stdev=6384.56 00:25:56.808 clat (usec): min=1383, max=557047, avg=178256.83, stdev=138892.17 00:25:56.808 lat (msec): min=2, max=557, avg=180.56, stdev=140.79 00:25:56.808 clat percentiles (msec): 00:25:56.808 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 26], 20.00th=[ 61], 00:25:56.808 | 30.00th=[ 90], 40.00th=[ 110], 50.00th=[ 127], 60.00th=[ 169], 00:25:56.808 | 70.00th=[ 230], 80.00th=[ 300], 90.00th=[ 409], 95.00th=[ 451], 00:25:56.808 | 99.00th=[ 510], 99.50th=[ 535], 99.90th=[ 550], 99.95th=[ 558], 00:25:56.808 | 99.99th=[ 558] 00:25:56.808 
bw ( KiB/s): min=30720, max=237568, per=8.78%, avg=90566.35, stdev=60479.00, samples=20 00:25:56.808 iops : min= 120, max= 928, avg=353.60, stdev=236.16, samples=20 00:25:56.808 lat (msec) : 2=0.06%, 4=0.28%, 10=2.72%, 20=4.80%, 50=8.60% 00:25:56.808 lat (msec) : 100=17.12%, 250=39.84%, 500=25.31%, 750=1.28% 00:25:56.808 cpu : usr=0.82%, sys=1.29%, ctx=1822, majf=0, minf=1 00:25:56.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:25:56.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.808 issued rwts: total=0,3604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.808 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.808 job1: (groupid=0, jobs=1): err= 0: pid=1326168: Sat Dec 7 10:00:25 2024 00:25:56.808 write: IOPS=385, BW=96.3MiB/s (101MB/s)(979MiB/10163msec); 0 zone resets 00:25:56.808 slat (usec): min=32, max=99454, avg=2319.09, stdev=6238.22 00:25:56.808 clat (msec): min=8, max=610, avg=163.65, stdev=126.34 00:25:56.808 lat (msec): min=8, max=610, avg=165.97, stdev=128.08 00:25:56.808 clat percentiles (msec): 00:25:56.808 | 1.00th=[ 59], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 89], 00:25:56.808 | 30.00th=[ 91], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 117], 00:25:56.808 | 70.00th=[ 150], 80.00th=[ 228], 90.00th=[ 401], 95.00th=[ 481], 00:25:56.808 | 99.00th=[ 542], 99.50th=[ 550], 99.90th=[ 584], 99.95th=[ 609], 00:25:56.808 | 99.99th=[ 609] 00:25:56.808 bw ( KiB/s): min=30720, max=184320, per=9.56%, avg=98568.20, stdev=59614.31, samples=20 00:25:56.808 iops : min= 120, max= 720, avg=384.90, stdev=232.84, samples=20 00:25:56.808 lat (msec) : 10=0.05%, 20=0.38%, 50=0.51%, 100=54.01%, 250=28.35% 00:25:56.808 lat (msec) : 500=14.10%, 750=2.60% 00:25:56.808 cpu : usr=0.91%, sys=1.44%, ctx=1211, majf=0, minf=1 00:25:56.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:56.808 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.808 issued rwts: total=0,3916,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.808 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.808 job2: (groupid=0, jobs=1): err= 0: pid=1326169: Sat Dec 7 10:00:25 2024 00:25:56.808 write: IOPS=226, BW=56.7MiB/s (59.4MB/s)(576MiB/10168msec); 0 zone resets 00:25:56.808 slat (usec): min=25, max=216583, avg=3297.85, stdev=10841.06 00:25:56.808 clat (msec): min=4, max=679, avg=278.83, stdev=177.33 00:25:56.808 lat (msec): min=4, max=679, avg=282.13, stdev=179.76 00:25:56.808 clat percentiles (msec): 00:25:56.808 | 1.00th=[ 10], 5.00th=[ 25], 10.00th=[ 61], 20.00th=[ 99], 00:25:56.808 | 30.00th=[ 136], 40.00th=[ 188], 50.00th=[ 251], 60.00th=[ 363], 00:25:56.808 | 70.00th=[ 401], 80.00th=[ 447], 90.00th=[ 506], 95.00th=[ 609], 00:25:56.808 | 99.00th=[ 659], 99.50th=[ 659], 99.90th=[ 667], 99.95th=[ 676], 00:25:56.808 | 99.99th=[ 676] 00:25:56.808 bw ( KiB/s): min=20992, max=183808, per=5.56%, avg=57375.40, stdev=40168.14, samples=20 00:25:56.808 iops : min= 82, max= 718, avg=224.00, stdev=156.97, samples=20 00:25:56.808 lat (msec) : 10=1.04%, 20=2.65%, 50=4.64%, 100=12.10%, 250=29.54% 00:25:56.808 lat (msec) : 500=39.78%, 750=10.24% 00:25:56.808 cpu : usr=0.56%, sys=0.77%, ctx=1253, majf=0, minf=1 00:25:56.808 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:25:56.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.808 issued rwts: total=0,2305,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.808 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.808 job3: (groupid=0, jobs=1): err= 0: pid=1326170: Sat Dec 7 10:00:25 2024 00:25:56.808 write: IOPS=247, BW=61.8MiB/s (64.8MB/s)(629MiB/10168msec); 0 zone 
resets 00:25:56.808 slat (usec): min=25, max=128844, avg=3206.73, stdev=8904.24 00:25:56.808 clat (usec): min=883, max=647530, avg=255525.42, stdev=170977.04 00:25:56.808 lat (usec): min=937, max=647585, avg=258732.16, stdev=172868.34 00:25:56.808 clat percentiles (msec): 00:25:56.808 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 41], 20.00th=[ 104], 00:25:56.808 | 30.00th=[ 117], 40.00th=[ 157], 50.00th=[ 218], 60.00th=[ 288], 00:25:56.808 | 70.00th=[ 393], 80.00th=[ 422], 90.00th=[ 502], 95.00th=[ 550], 00:25:56.808 | 99.00th=[ 600], 99.50th=[ 617], 99.90th=[ 642], 99.95th=[ 642], 00:25:56.808 | 99.99th=[ 651] 00:25:56.808 bw ( KiB/s): min=28672, max=184689, per=6.08%, avg=62743.60, stdev=42337.39, samples=20 00:25:56.808 iops : min= 112, max= 721, avg=244.95, stdev=165.36, samples=20 00:25:56.808 lat (usec) : 1000=0.08% 00:25:56.808 lat (msec) : 2=0.28%, 4=0.28%, 10=0.72%, 20=2.82%, 50=6.25% 00:25:56.808 lat (msec) : 100=7.36%, 250=36.63%, 500=35.04%, 750=10.54% 00:25:56.808 cpu : usr=0.53%, sys=0.83%, ctx=1201, majf=0, minf=1 00:25:56.808 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:25:56.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.808 issued rwts: total=0,2514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.808 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.808 job4: (groupid=0, jobs=1): err= 0: pid=1326171: Sat Dec 7 10:00:25 2024 00:25:56.808 write: IOPS=377, BW=94.5MiB/s (99.1MB/s)(960MiB/10163msec); 0 zone resets 00:25:56.808 slat (usec): min=26, max=246974, avg=2168.15, stdev=7982.80 00:25:56.808 clat (usec): min=1448, max=720501, avg=167049.26, stdev=130949.94 00:25:56.808 lat (usec): min=1525, max=720546, avg=169217.40, stdev=132408.30 00:25:56.808 clat percentiles (msec): 00:25:56.808 | 1.00th=[ 29], 5.00th=[ 72], 10.00th=[ 85], 20.00th=[ 87], 00:25:56.808 | 30.00th=[ 90], 40.00th=[ 91], 
50.00th=[ 92], 60.00th=[ 113], 00:25:56.808 | 70.00th=[ 176], 80.00th=[ 243], 90.00th=[ 409], 95.00th=[ 464], 00:25:56.808 | 99.00th=[ 542], 99.50th=[ 575], 99.90th=[ 709], 99.95th=[ 718], 00:25:56.808 | 99.99th=[ 718] 00:25:56.808 bw ( KiB/s): min=23504, max=203264, per=9.37%, avg=96657.80, stdev=62716.59, samples=20 00:25:56.808 iops : min= 91, max= 794, avg=377.45, stdev=245.04, samples=20 00:25:56.808 lat (msec) : 2=0.03%, 4=0.23%, 10=0.39%, 20=0.08%, 50=1.28% 00:25:56.808 lat (msec) : 100=55.71%, 250=24.24%, 500=15.05%, 750=2.99% 00:25:56.808 cpu : usr=0.93%, sys=1.02%, ctx=1387, majf=0, minf=1 00:25:56.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:56.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.808 issued rwts: total=0,3841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.808 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.808 job5: (groupid=0, jobs=1): err= 0: pid=1326172: Sat Dec 7 10:00:25 2024 00:25:56.808 write: IOPS=270, BW=67.6MiB/s (70.8MB/s)(687MiB/10162msec); 0 zone resets 00:25:56.808 slat (usec): min=27, max=211814, avg=2603.26, stdev=8886.02 00:25:56.808 clat (usec): min=965, max=629513, avg=233659.26, stdev=179280.88 00:25:56.808 lat (usec): min=1021, max=629565, avg=236262.52, stdev=181548.05 00:25:56.808 clat percentiles (msec): 00:25:56.808 | 1.00th=[ 3], 5.00th=[ 12], 10.00th=[ 29], 20.00th=[ 49], 00:25:56.809 | 30.00th=[ 83], 40.00th=[ 114], 50.00th=[ 188], 60.00th=[ 300], 00:25:56.809 | 70.00th=[ 388], 80.00th=[ 414], 90.00th=[ 464], 95.00th=[ 531], 00:25:56.809 | 99.00th=[ 600], 99.50th=[ 625], 99.90th=[ 634], 99.95th=[ 634], 00:25:56.809 | 99.99th=[ 634] 00:25:56.809 bw ( KiB/s): min=24576, max=189440, per=6.65%, avg=68632.45, stdev=50663.14, samples=20 00:25:56.809 iops : min= 96, max= 740, avg=267.90, stdev=197.92, samples=20 00:25:56.809 lat (usec) : 
1000=0.04% 00:25:56.809 lat (msec) : 2=0.51%, 4=1.09%, 10=2.95%, 20=3.10%, 50=12.86% 00:25:56.809 lat (msec) : 100=13.15%, 250=21.70%, 500=37.51%, 750=7.10% 00:25:56.809 cpu : usr=0.84%, sys=0.94%, ctx=1651, majf=0, minf=1 00:25:56.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:25:56.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.809 issued rwts: total=0,2746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.809 job6: (groupid=0, jobs=1): err= 0: pid=1326173: Sat Dec 7 10:00:25 2024 00:25:56.809 write: IOPS=479, BW=120MiB/s (126MB/s)(1217MiB/10145msec); 0 zone resets 00:25:56.809 slat (usec): min=25, max=195501, avg=1188.65, stdev=5847.79 00:25:56.809 clat (usec): min=882, max=670387, avg=132142.29, stdev=146856.14 00:25:56.809 lat (usec): min=923, max=675434, avg=133330.95, stdev=148418.02 00:25:56.809 clat percentiles (msec): 00:25:56.809 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 9], 20.00th=[ 18], 00:25:56.809 | 30.00th=[ 36], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 101], 00:25:56.809 | 70.00th=[ 133], 80.00th=[ 205], 90.00th=[ 422], 95.00th=[ 451], 00:25:56.809 | 99.00th=[ 592], 99.50th=[ 642], 99.90th=[ 667], 99.95th=[ 667], 00:25:56.809 | 99.99th=[ 667] 00:25:56.809 bw ( KiB/s): min=36790, max=284614, per=11.91%, avg=122900.95, stdev=77347.15, samples=20 00:25:56.809 iops : min= 143, max= 1111, avg=479.90, stdev=302.10, samples=20 00:25:56.809 lat (usec) : 1000=0.02% 00:25:56.809 lat (msec) : 2=0.62%, 4=2.06%, 10=8.88%, 20=10.36%, 50=12.84% 00:25:56.809 lat (msec) : 100=25.09%, 250=24.46%, 500=12.68%, 750=3.00% 00:25:56.809 cpu : usr=0.89%, sys=1.74%, ctx=3443, majf=0, minf=1 00:25:56.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:56.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:25:56.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.809 issued rwts: total=0,4866,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.809 job7: (groupid=0, jobs=1): err= 0: pid=1326174: Sat Dec 7 10:00:25 2024 00:25:56.809 write: IOPS=663, BW=166MiB/s (174MB/s)(1668MiB/10051msec); 0 zone resets 00:25:56.809 slat (usec): min=17, max=142176, avg=1189.20, stdev=3348.80 00:25:56.809 clat (usec): min=1288, max=616073, avg=95181.52, stdev=70761.75 00:25:56.809 lat (usec): min=1351, max=620002, avg=96370.72, stdev=71336.90 00:25:56.809 clat percentiles (msec): 00:25:56.809 | 1.00th=[ 5], 5.00th=[ 20], 10.00th=[ 40], 20.00th=[ 47], 00:25:56.809 | 30.00th=[ 59], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 96], 00:25:56.809 | 70.00th=[ 113], 80.00th=[ 120], 90.00th=[ 148], 95.00th=[ 220], 00:25:56.809 | 99.00th=[ 409], 99.50th=[ 493], 99.90th=[ 600], 99.95th=[ 609], 00:25:56.809 | 99.99th=[ 617] 00:25:56.809 bw ( KiB/s): min=30658, max=355527, per=16.40%, avg=169198.10, stdev=80962.47, samples=20 00:25:56.809 iops : min= 119, max= 1388, avg=660.80, stdev=316.26, samples=20 00:25:56.809 lat (msec) : 2=0.07%, 4=0.70%, 10=1.83%, 20=2.47%, 50=20.43% 00:25:56.809 lat (msec) : 100=36.18%, 250=35.17%, 500=2.68%, 750=0.46% 00:25:56.809 cpu : usr=1.61%, sys=2.10%, ctx=2745, majf=0, minf=1 00:25:56.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:56.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.809 issued rwts: total=0,6673,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.809 job8: (groupid=0, jobs=1): err= 0: pid=1326175: Sat Dec 7 10:00:25 2024 00:25:56.809 write: IOPS=272, BW=68.1MiB/s (71.4MB/s)(691MiB/10153msec); 0 zone resets 00:25:56.809 slat (usec): 
min=25, max=180602, avg=2434.29, stdev=8272.46 00:25:56.809 clat (usec): min=767, max=662285, avg=232561.91, stdev=180073.94 00:25:56.809 lat (usec): min=812, max=667578, avg=234996.20, stdev=182192.18 00:25:56.809 clat percentiles (msec): 00:25:56.809 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 39], 20.00th=[ 46], 00:25:56.809 | 30.00th=[ 62], 40.00th=[ 105], 50.00th=[ 205], 60.00th=[ 288], 00:25:56.809 | 70.00th=[ 388], 80.00th=[ 418], 90.00th=[ 477], 95.00th=[ 510], 00:25:56.809 | 99.00th=[ 634], 99.50th=[ 642], 99.90th=[ 659], 99.95th=[ 659], 00:25:56.809 | 99.99th=[ 659] 00:25:56.809 bw ( KiB/s): min=30658, max=280576, per=6.70%, avg=69126.00, stdev=57920.03, samples=20 00:25:56.809 iops : min= 119, max= 1096, avg=269.85, stdev=226.33, samples=20 00:25:56.809 lat (usec) : 1000=0.14% 00:25:56.809 lat (msec) : 2=0.76%, 4=1.19%, 10=2.42%, 20=1.37%, 50=20.33% 00:25:56.809 lat (msec) : 100=13.31%, 250=16.14%, 500=37.88%, 750=6.44% 00:25:56.809 cpu : usr=0.54%, sys=0.98%, ctx=1520, majf=0, minf=1 00:25:56.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:25:56.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.809 issued rwts: total=0,2764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.809 job9: (groupid=0, jobs=1): err= 0: pid=1326180: Sat Dec 7 10:00:25 2024 00:25:56.809 write: IOPS=257, BW=64.5MiB/s (67.6MB/s)(654MiB/10142msec); 0 zone resets 00:25:56.809 slat (usec): min=23, max=197740, avg=2702.34, stdev=9682.50 00:25:56.809 clat (usec): min=834, max=664106, avg=245327.62, stdev=183606.74 00:25:56.809 lat (usec): min=876, max=664198, avg=248029.97, stdev=185728.36 00:25:56.809 clat percentiles (usec): 00:25:56.809 | 1.00th=[ 1811], 5.00th=[ 15664], 10.00th=[ 26084], 20.00th=[ 46924], 00:25:56.809 | 30.00th=[ 90702], 40.00th=[143655], 50.00th=[206570], 
60.00th=[316670], 00:25:56.809 | 70.00th=[400557], 80.00th=[429917], 90.00th=[484443], 95.00th=[541066], 00:25:56.809 | 99.00th=[624952], 99.50th=[633340], 99.90th=[641729], 99.95th=[641729], 00:25:56.809 | 99.99th=[666895] 00:25:56.809 bw ( KiB/s): min=24576, max=192638, per=6.33%, avg=65326.85, stdev=45806.91, samples=20 00:25:56.809 iops : min= 96, max= 752, avg=255.05, stdev=178.91, samples=20 00:25:56.809 lat (usec) : 1000=0.34% 00:25:56.809 lat (msec) : 2=0.84%, 4=0.54%, 10=1.34%, 20=3.52%, 50=14.18% 00:25:56.809 lat (msec) : 100=11.31%, 250=22.52%, 500=37.39%, 750=8.03% 00:25:56.809 cpu : usr=0.65%, sys=0.92%, ctx=1571, majf=0, minf=1 00:25:56.809 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:25:56.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.809 issued rwts: total=0,2616,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.809 job10: (groupid=0, jobs=1): err= 0: pid=1326184: Sat Dec 7 10:00:25 2024 00:25:56.809 write: IOPS=506, BW=127MiB/s (133MB/s)(1285MiB/10144msec); 0 zone resets 00:25:56.809 slat (usec): min=18, max=90513, avg=963.24, stdev=4109.34 00:25:56.809 clat (usec): min=721, max=602898, avg=125262.72, stdev=122036.17 00:25:56.809 lat (usec): min=761, max=602950, avg=126225.96, stdev=122968.18 00:25:56.809 clat percentiles (usec): 00:25:56.809 | 1.00th=[ 1516], 5.00th=[ 4146], 10.00th=[ 8979], 20.00th=[ 34866], 00:25:56.809 | 30.00th=[ 51643], 40.00th=[ 73925], 50.00th=[110625], 60.00th=[117965], 00:25:56.809 | 70.00th=[124257], 80.00th=[179307], 90.00th=[274727], 95.00th=[450888], 00:25:56.809 | 99.00th=[541066], 99.50th=[557843], 99.90th=[566232], 99.95th=[591397], 00:25:56.809 | 99.99th=[599786] 00:25:56.809 bw ( KiB/s): min=34816, max=278016, per=12.60%, avg=129947.65, stdev=69650.27, samples=20 00:25:56.809 iops : min= 136, 
max= 1086, avg=507.45, stdev=272.07, samples=20 00:25:56.809 lat (usec) : 750=0.04%, 1000=0.25% 00:25:56.809 lat (msec) : 2=1.30%, 4=3.19%, 10=5.97%, 20=4.82%, 50=13.65% 00:25:56.809 lat (msec) : 100=17.70%, 250=41.70%, 500=9.30%, 750=2.06% 00:25:56.809 cpu : usr=1.12%, sys=1.63%, ctx=3506, majf=0, minf=1 00:25:56.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:56.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.809 issued rwts: total=0,5141,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.809 00:25:56.809 Run status group 0 (all jobs): 00:25:56.809 WRITE: bw=1007MiB/s (1056MB/s), 56.7MiB/s-166MiB/s (59.4MB/s-174MB/s), io=10.0GiB (10.7GB), run=10051-10172msec 00:25:56.809 00:25:56.809 Disk stats (read/write): 00:25:56.809 nvme0n1: ios=49/7154, merge=0/0, ticks=521/1234715, in_queue=1235236, util=99.20% 00:25:56.809 nvme10n1: ios=46/7788, merge=0/0, ticks=2434/1220079, in_queue=1222513, util=100.00% 00:25:56.809 nvme1n1: ios=44/4561, merge=0/0, ticks=3260/1229133, in_queue=1232393, util=99.93% 00:25:56.809 nvme2n1: ios=0/4980, merge=0/0, ticks=0/1234056, in_queue=1234056, util=95.79% 00:25:56.809 nvme3n1: ios=42/7638, merge=0/0, ticks=4289/1180629, in_queue=1184918, util=99.99% 00:25:56.809 nvme4n1: ios=41/5451, merge=0/0, ticks=1723/1235997, in_queue=1237720, util=100.00% 00:25:56.809 nvme5n1: ios=33/9714, merge=0/0, ticks=965/1249394, in_queue=1250359, util=99.99% 00:25:56.809 nvme6n1: ios=0/12932, merge=0/0, ticks=0/1214004, in_queue=1214004, util=97.38% 00:25:56.809 nvme7n1: ios=0/5501, merge=0/0, ticks=0/1241899, in_queue=1241899, util=98.40% 00:25:56.809 nvme8n1: ios=26/5216, merge=0/0, ticks=159/1241387, in_queue=1241546, util=99.37% 00:25:56.809 nvme9n1: ios=0/10265, merge=0/0, ticks=0/1252552, in_queue=1252552, util=99.03% 00:25:56.809 
10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:56.809 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:56.809 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.809 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:57.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:57.067 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:57.067 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:57.067 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:57.067 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:25:57.067 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:57.067 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:25:57.067 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:57.067 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:57.067 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.067 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.067 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:57.067 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.067 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:57.632 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:57.632 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:57.632 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:57.632 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:57.632 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:25:57.632 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:25:57.632 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:57.632 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:57.632 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:57.632 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.632 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.632 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.632 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.632 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:57.889 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:57.889 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.889 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:58.147 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:58.147 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:58.147 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:58.147 10:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:58.147 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:25:58.147 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:25:58.147 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:58.147 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:58.147 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:58.147 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.147 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.147 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.147 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.147 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:58.404 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:58.404 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:58.404 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:58.404 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:58.404 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep 
-q -w SPDK6 00:25:58.404 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:58.404 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:25:58.404 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:58.405 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:58.405 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.405 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.405 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.405 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.405 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:58.673 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:58.673 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:58.673 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:58.673 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:58.673 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:25:58.673 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:58.673 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:25:58.673 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:58.673 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:58.673 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.673 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.673 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.673 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.673 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:58.932 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:58.932 10:00:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:58.932 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.932 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:59.232 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.232 10:00:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:59.232 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.232 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f 
./local-job0-0-verify.state 00:25:59.490 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:59.490 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:59.490 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:59.490 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:25:59.490 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:59.490 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:25:59.490 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:59.490 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:59.490 rmmod nvme_tcp 00:25:59.490 rmmod nvme_fabrics 00:25:59.490 rmmod nvme_keyring 00:25:59.490 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:59.490 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:25:59.490 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:25:59.490 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 1318134 ']' 00:25:59.490 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 1318134 00:25:59.490 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 1318134 ']' 00:25:59.490 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 1318134 00:25:59.490 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:25:59.490 10:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:59.490 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1318134 00:25:59.490 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:59.490 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:59.490 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1318134' 00:25:59.490 killing process with pid 1318134 00:25:59.490 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 1318134 00:25:59.490 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 1318134 00:26:00.057 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:00.057 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:00.057 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:00.057 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:00.057 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:26:00.057 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:00.057 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:26:00.057 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:00.057 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:26:00.057 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.057 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.057 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.959 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:01.959 00:26:01.959 real 1m10.383s 00:26:01.959 user 4m14.520s 00:26:01.959 sys 0m16.997s 00:26:01.959 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:01.959 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:01.959 ************************************ 00:26:01.959 END TEST nvmf_multiconnection 00:26:01.959 ************************************ 00:26:01.959 10:00:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:01.959 10:00:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:01.959 10:00:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:01.959 10:00:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:01.959 ************************************ 00:26:01.959 START TEST nvmf_initiator_timeout 00:26:01.959 ************************************ 00:26:01.959 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:02.240 * Looking for test storage... 
00:26:02.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:02.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.240 --rc genhtml_branch_coverage=1 00:26:02.240 --rc genhtml_function_coverage=1 00:26:02.240 --rc genhtml_legend=1 00:26:02.240 --rc geninfo_all_blocks=1 00:26:02.240 --rc geninfo_unexecuted_blocks=1 00:26:02.240 00:26:02.240 ' 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:02.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.240 --rc genhtml_branch_coverage=1 00:26:02.240 --rc genhtml_function_coverage=1 00:26:02.240 --rc genhtml_legend=1 00:26:02.240 --rc geninfo_all_blocks=1 00:26:02.240 --rc geninfo_unexecuted_blocks=1 00:26:02.240 00:26:02.240 ' 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:02.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.240 --rc genhtml_branch_coverage=1 00:26:02.240 --rc genhtml_function_coverage=1 00:26:02.240 --rc genhtml_legend=1 00:26:02.240 --rc geninfo_all_blocks=1 00:26:02.240 --rc geninfo_unexecuted_blocks=1 00:26:02.240 00:26:02.240 ' 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:02.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.240 --rc genhtml_branch_coverage=1 00:26:02.240 --rc genhtml_function_coverage=1 00:26:02.240 --rc genhtml_legend=1 00:26:02.240 --rc geninfo_all_blocks=1 00:26:02.240 --rc geninfo_unexecuted_blocks=1 00:26:02.240 00:26:02.240 ' 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.240 
10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.240 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:02.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:02.241 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.514 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:07.514 10:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:07.514 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:07.514 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:07.514 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:07.515 
10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:07.515 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:07.515 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:26:07.515 10:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:07.515 Found net devices under 0000:86:00.0: cvl_0_0 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:07.515 10:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:07.515 Found net devices under 0000:86:00.1: cvl_0_1 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # is_hw=yes 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- 
# NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:07.515 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:07.775 10:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:07.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:07.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:26:07.775 00:26:07.775 --- 10.0.0.2 ping statistics --- 00:26:07.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.775 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:07.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:07.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:26:07.775 00:26:07.775 --- 10.0.0.1 ping statistics --- 00:26:07.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.775 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # return 0 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=1331481 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 1331481 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 1331481 ']' 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:07.775 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.775 [2024-12-07 10:00:36.446633] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:07.775 [2024-12-07 10:00:36.446678] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.035 [2024-12-07 10:00:36.507318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:08.035 [2024-12-07 10:00:36.550066] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.035 [2024-12-07 10:00:36.550108] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:08.035 [2024-12-07 10:00:36.550116] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:08.035 [2024-12-07 10:00:36.550123] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:08.035 [2024-12-07 10:00:36.550128] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:08.035 [2024-12-07 10:00:36.550182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.035 [2024-12-07 10:00:36.550204] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:08.035 [2024-12-07 10:00:36.550273] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:08.035 [2024-12-07 10:00:36.550274] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.035 Malloc0 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.035 10:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.035 Delay0 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.035 [2024-12-07 10:00:36.736122] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.035 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.294 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.294 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:08.294 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.294 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.294 [2024-12-07 10:00:36.768382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.294 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.294 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:09.231 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:09.231 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:09.231 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:09.231 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:09.231 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:26:11.136 10:00:39 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:11.418 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:11.418 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:11.418 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:11.418 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:11.418 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:26:11.418 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1332085 00:26:11.418 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:11.418 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:11.418 [global] 00:26:11.418 thread=1 00:26:11.418 invalidate=1 00:26:11.418 rw=write 00:26:11.418 time_based=1 00:26:11.418 runtime=60 00:26:11.418 ioengine=libaio 00:26:11.418 direct=1 00:26:11.418 bs=4096 00:26:11.418 iodepth=1 00:26:11.418 norandommap=0 00:26:11.418 numjobs=1 00:26:11.418 00:26:11.418 verify_dump=1 00:26:11.418 verify_backlog=512 00:26:11.418 verify_state_save=0 00:26:11.418 do_verify=1 00:26:11.418 verify=crc32c-intel 00:26:11.418 [job0] 00:26:11.418 filename=/dev/nvme0n1 00:26:11.418 Could not set queue depth (nvme0n1) 00:26:11.677 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:11.677 fio-3.35 00:26:11.677 Starting 1 thread 00:26:14.195 10:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:14.195 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.195 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:14.195 true 00:26:14.195 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.195 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:14.195 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.195 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:14.195 true 00:26:14.195 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.195 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:14.195 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.195 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:14.195 true 00:26:14.195 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.195 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:14.195 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.195 10:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:14.452 true 00:26:14.452 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.452 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:17.726 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:17.726 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.726 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.726 true 00:26:17.726 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.726 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:17.726 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.726 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.726 true 00:26:17.726 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.726 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:17.726 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.726 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.726 true 00:26:17.726 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:26:17.726 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:17.726 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.726 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.726 true 00:26:17.726 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.726 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:17.726 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1332085 00:27:13.920 00:27:13.920 job0: (groupid=0, jobs=1): err= 0: pid=1332212: Sat Dec 7 10:01:40 2024 00:27:13.920 read: IOPS=265, BW=1061KiB/s (1086kB/s)(62.2MiB/60033msec) 00:27:13.920 slat (usec): min=6, max=14415, avg= 9.35, stdev=151.30 00:27:13.920 clat (usec): min=243, max=41753k, avg=3538.25, stdev=330957.21 00:27:13.920 lat (usec): min=252, max=41753k, avg=3547.61, stdev=330957.38 00:27:13.920 clat percentiles (usec): 00:27:13.920 | 1.00th=[ 260], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 00:27:13.920 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 297], 00:27:13.920 | 70.00th=[ 302], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 441], 00:27:13.920 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:27:13.920 | 99.99th=[42730] 00:27:13.920 write: IOPS=272, BW=1092KiB/s (1118kB/s)(64.0MiB/60033msec); 0 zone resets 00:27:13.920 slat (nsec): min=9412, max=40810, avg=10625.39, stdev=1201.05 00:27:13.920 clat (usec): min=160, max=380, avg=202.37, stdev=18.22 00:27:13.920 lat (usec): min=171, max=390, avg=212.99, stdev=18.28 00:27:13.920 clat percentiles (usec): 00:27:13.920 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:27:13.920 | 
30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:27:13.920 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 227], 95.00th=[ 239], 00:27:13.920 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 314], 99.95th=[ 326], 00:27:13.920 | 99.99th=[ 371] 00:27:13.920 bw ( KiB/s): min= 72, max= 8192, per=100.00%, avg=5957.82, stdev=2321.47, samples=22 00:27:13.920 iops : min= 18, max= 2048, avg=1489.45, stdev=580.37, samples=22 00:27:13.920 lat (usec) : 250=49.74%, 500=49.48%, 750=0.04% 00:27:13.920 lat (msec) : 50=0.74%, >=2000=0.01% 00:27:13.920 cpu : usr=0.27%, sys=0.51%, ctx=32306, majf=0, minf=1 00:27:13.920 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:13.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.920 issued rwts: total=15919,16384,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.920 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:13.920 00:27:13.920 Run status group 0 (all jobs): 00:27:13.920 READ: bw=1061KiB/s (1086kB/s), 1061KiB/s-1061KiB/s (1086kB/s-1086kB/s), io=62.2MiB (65.2MB), run=60033-60033msec 00:27:13.920 WRITE: bw=1092KiB/s (1118kB/s), 1092KiB/s-1092KiB/s (1118kB/s-1118kB/s), io=64.0MiB (67.1MB), run=60033-60033msec 00:27:13.920 00:27:13.920 Disk stats (read/write): 00:27:13.920 nvme0n1: ios=16014/16384, merge=0/0, ticks=14400/3223, in_queue=17623, util=99.57% 00:27:13.920 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:13.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:13.920 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:13.920 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:13.920 10:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:13.920 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:13.920 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:13.921 nvmf hotplug test: fio successful as expected 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:13.921 10:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:13.921 rmmod nvme_tcp 00:27:13.921 rmmod nvme_fabrics 00:27:13.921 rmmod nvme_keyring 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 1331481 ']' 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 1331481 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 1331481 ']' 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 1331481 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1331481 00:27:13.921 10:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1331481' 00:27:13.921 killing process with pid 1331481 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 1331481 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 1331481 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 15> /dev/null' 00:27:13.921 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.180 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:14.438 00:27:14.438 real 1m12.291s 00:27:14.438 user 4m21.325s 00:27:14.438 sys 0m6.844s 00:27:14.438 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:14.438 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:14.438 ************************************ 00:27:14.438 END TEST nvmf_initiator_timeout 00:27:14.438 ************************************ 00:27:14.438 10:01:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:14.438 10:01:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:14.438 10:01:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:14.438 10:01:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:14.438 10:01:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:19.709 10:01:47 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:19.709 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:19.709 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:19.709 Found net devices under 0000:86:00.0: cvl_0_0 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:19.709 Found net devices under 0000:86:00.1: cvl_0_1 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:19.709 ************************************ 00:27:19.709 START TEST nvmf_perf_adq 00:27:19.709 ************************************ 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:19.709 * Looking for test storage... 
00:27:19.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:19.709 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:19.710 10:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:19.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.710 --rc 
genhtml_branch_coverage=1 00:27:19.710 --rc genhtml_function_coverage=1 00:27:19.710 --rc genhtml_legend=1 00:27:19.710 --rc geninfo_all_blocks=1 00:27:19.710 --rc geninfo_unexecuted_blocks=1 00:27:19.710 00:27:19.710 ' 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:19.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.710 --rc genhtml_branch_coverage=1 00:27:19.710 --rc genhtml_function_coverage=1 00:27:19.710 --rc genhtml_legend=1 00:27:19.710 --rc geninfo_all_blocks=1 00:27:19.710 --rc geninfo_unexecuted_blocks=1 00:27:19.710 00:27:19.710 ' 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:19.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.710 --rc genhtml_branch_coverage=1 00:27:19.710 --rc genhtml_function_coverage=1 00:27:19.710 --rc genhtml_legend=1 00:27:19.710 --rc geninfo_all_blocks=1 00:27:19.710 --rc geninfo_unexecuted_blocks=1 00:27:19.710 00:27:19.710 ' 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:19.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.710 --rc genhtml_branch_coverage=1 00:27:19.710 --rc genhtml_function_coverage=1 00:27:19.710 --rc genhtml_legend=1 00:27:19.710 --rc geninfo_all_blocks=1 00:27:19.710 --rc geninfo_unexecuted_blocks=1 00:27:19.710 00:27:19.710 ' 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.710 10:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:19.710 10:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.710 10:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:19.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:19.710 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:24.970 10:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.970 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:24.971 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:24.971 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == 
unknown ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:24.971 Found net devices under 0000:86:00.0: cvl_0_0 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.971 10:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:24.971 Found net devices under 0000:86:00.1: cvl_0_1 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:24.971 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:26.350 10:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:28.255 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
local -a pci_devs 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:33.525 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:33.526 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:33.526 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.526 10:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:33.526 Found net devices under 0000:86:00.0: cvl_0_0 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:33.526 Found net devices under 0000:86:00.1: cvl_0_1 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 
00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:33.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:27:33.526 00:27:33.526 --- 10.0.0.2 ping statistics --- 00:27:33.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.526 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:33.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:27:33.526 00:27:33.526 --- 10.0.0.1 ping statistics --- 00:27:33.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.526 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=1349769 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 1349769 00:27:33.526 
10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1349769 ']' 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:33.526 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.526 [2024-12-07 10:02:01.948319] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:27:33.526 [2024-12-07 10:02:01.948362] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.526 [2024-12-07 10:02:02.005837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:33.526 [2024-12-07 10:02:02.048650] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.526 [2024-12-07 10:02:02.048691] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:33.526 [2024-12-07 10:02:02.048700] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.527 [2024-12-07 10:02:02.048706] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.527 [2024-12-07 10:02:02.048712] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:33.527 [2024-12-07 10:02:02.048760] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.527 [2024-12-07 10:02:02.048809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:33.527 [2024-12-07 10:02:02.048782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.527 [2024-12-07 10:02:02.048810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.527 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.786 [2024-12-07 10:02:02.289454] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.786 
10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.786 Malloc1 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.786 [2024-12-07 10:02:02.343996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1349802 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:33.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:35.691 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:35.691 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.691 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:35.691 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.691 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:35.691 "tick_rate": 2300000000, 00:27:35.691 "poll_groups": [ 00:27:35.691 { 00:27:35.691 "name": "nvmf_tgt_poll_group_000", 00:27:35.691 "admin_qpairs": 1, 00:27:35.691 "io_qpairs": 1, 00:27:35.691 "current_admin_qpairs": 1, 00:27:35.691 "current_io_qpairs": 1, 00:27:35.691 "pending_bdev_io": 0, 00:27:35.691 "completed_nvme_io": 20148, 00:27:35.691 "transports": [ 00:27:35.691 { 00:27:35.691 "trtype": "TCP" 00:27:35.691 } 00:27:35.691 ] 00:27:35.691 }, 00:27:35.691 { 00:27:35.691 "name": "nvmf_tgt_poll_group_001", 00:27:35.691 "admin_qpairs": 0, 00:27:35.691 "io_qpairs": 1, 00:27:35.691 "current_admin_qpairs": 0, 00:27:35.691 "current_io_qpairs": 1, 00:27:35.691 "pending_bdev_io": 0, 00:27:35.691 "completed_nvme_io": 20703, 00:27:35.691 "transports": [ 
00:27:35.691 { 00:27:35.691 "trtype": "TCP" 00:27:35.691 } 00:27:35.691 ] 00:27:35.691 }, 00:27:35.691 { 00:27:35.691 "name": "nvmf_tgt_poll_group_002", 00:27:35.691 "admin_qpairs": 0, 00:27:35.691 "io_qpairs": 1, 00:27:35.691 "current_admin_qpairs": 0, 00:27:35.691 "current_io_qpairs": 1, 00:27:35.691 "pending_bdev_io": 0, 00:27:35.691 "completed_nvme_io": 20374, 00:27:35.691 "transports": [ 00:27:35.691 { 00:27:35.691 "trtype": "TCP" 00:27:35.691 } 00:27:35.691 ] 00:27:35.691 }, 00:27:35.691 { 00:27:35.691 "name": "nvmf_tgt_poll_group_003", 00:27:35.691 "admin_qpairs": 0, 00:27:35.691 "io_qpairs": 1, 00:27:35.691 "current_admin_qpairs": 0, 00:27:35.691 "current_io_qpairs": 1, 00:27:35.691 "pending_bdev_io": 0, 00:27:35.691 "completed_nvme_io": 20165, 00:27:35.692 "transports": [ 00:27:35.692 { 00:27:35.692 "trtype": "TCP" 00:27:35.692 } 00:27:35.692 ] 00:27:35.692 } 00:27:35.692 ] 00:27:35.692 }' 00:27:35.692 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:35.692 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:35.950 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:35.951 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:35.951 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1349802 00:27:44.247 Initializing NVMe Controllers 00:27:44.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:44.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:44.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:44.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:44.247 Associating TCP (addr:10.0.0.2 
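[Editor's note] The jq pipeline at perf_adq.sh@86-87 above counts poll groups with `current_io_qpairs == 1` and requires the count to equal 4, i.e. the perf load must be spread one io_qpair per core. A simplified stand-in for that check (assumption: `grep` replaces jq here, and the JSON is reduced to the fields the filter actually inspects):

```shell
# Reduced nvmf_get_stats payload: one entry per poll group, keeping only
# the field the perf_adq.sh check looks at.
stats='{"poll_groups":[
{"name":"nvmf_tgt_poll_group_000","current_io_qpairs":1},
{"name":"nvmf_tgt_poll_group_001","current_io_qpairs":1},
{"name":"nvmf_tgt_poll_group_002","current_io_qpairs":1},
{"name":"nvmf_tgt_poll_group_003","current_io_qpairs":1}]}'
# Count poll groups carrying exactly one active io_qpair.
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs":1')
if [ "$count" -ne 4 ]; then
    echo "load not spread across all poll groups: $count/4"
    exit 1
fi
echo "count=$count"
```

In the log above the check passes (`count=4`, so `[[ 4 -ne 4 ]]` is false and the test proceeds to wait on the perf pid).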
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:44.247 Initialization complete. Launching workers. 00:27:44.247 ======================================================== 00:27:44.247 Latency(us) 00:27:44.247 Device Information : IOPS MiB/s Average min max 00:27:44.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10383.70 40.56 6162.76 2341.37 10174.19 00:27:44.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10639.20 41.56 6015.68 2247.19 10400.02 00:27:44.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10557.20 41.24 6062.90 2115.45 10290.18 00:27:44.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10447.20 40.81 6125.55 2089.91 10201.41 00:27:44.247 ======================================================== 00:27:44.247 Total : 42027.30 164.17 6091.19 2089.91 10400.02 00:27:44.247 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:44.247 rmmod nvme_tcp 00:27:44.247 rmmod nvme_fabrics 00:27:44.247 rmmod nvme_keyring 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:27:44.247 10:02:12 
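[Editor's note] A quick sanity check on the spdk_nvme_perf summary above (hypothetical helper, not part of the test scripts): the four per-lcore IOPS figures should sum to the reported total of 42027.30 and stay roughly balanced, which is the point of the ADQ queue steering:

```shell
# Verify the per-core IOPS from the latency table sum to the reported
# total and compute the max-min spread as a percentage of the max.
result=$(awk 'BEGIN {
    n = split("10383.70 10639.20 10557.20 10447.20", iops, " ")
    total = 0; min = iops[1]; max = iops[1]
    for (i = 1; i <= n; i++) {
        total += iops[i]
        if (iops[i] < min) min = iops[i]
        if (iops[i] > max) max = iops[i]
    }
    printf "total=%.2f spread=%.1f%%", total, 100 * (max - min) / max
}')
echo "$result"
```

The spread between the busiest and idlest core is about 2.4%, consistent with one io_qpair pinned per poll group.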
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 1349769 ']' 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 1349769 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1349769 ']' 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1349769 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1349769 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1349769' 00:27:44.247 killing process with pid 1349769 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1349769 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1349769 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:44.247 
10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.247 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.149 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:46.149 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:27:46.149 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:46.149 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:47.522 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:49.425 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:54.698 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:27:54.698 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:54.698 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.698 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@472 -- # prepare_net_devs 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:54.699 10:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:54.699 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:54.699 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:54.699 Found net devices under 0000:86:00.0: cvl_0_0 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:54.699 Found net devices under 0000:86:00.1: cvl_0_1 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:54.699 10:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:54.699 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:54.699 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:54.699 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:54.699 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:54.699 10:02:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:54.699 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:54.699 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:54.699 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:54.699 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:54.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:54.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:27:54.700 00:27:54.700 --- 10.0.0.2 ping statistics --- 00:27:54.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.700 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:54.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:54.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:27:54.700 00:27:54.700 --- 10.0.0.1 ping statistics --- 00:27:54.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.700 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:54.700 net.core.busy_poll = 1 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:54.700 net.core.busy_read = 1 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=1353595 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 1353595 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1353595 ']' 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:54.700 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.960 [2024-12-07 10:02:23.435704] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:27:54.960 [2024-12-07 10:02:23.435752] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:54.960 [2024-12-07 10:02:23.495138] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:54.960 [2024-12-07 10:02:23.537168] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:54.960 [2024-12-07 10:02:23.537205] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:54.960 [2024-12-07 10:02:23.537212] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:54.960 [2024-12-07 10:02:23.537218] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:54.960 [2024-12-07 10:02:23.537223] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:54.960 [2024-12-07 10:02:23.537322] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.960 [2024-12-07 10:02:23.537416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:54.960 [2024-12-07 10:02:23.537447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:54.960 [2024-12-07 10:02:23.537448] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.960 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.961 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.220 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.220 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:55.220 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.220 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.220 [2024-12-07 10:02:23.760714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.220 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.220 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:55.220 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.220 10:02:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.220 Malloc1 00:27:55.220 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.220 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:55.220 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.220 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.220 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.220 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:55.220 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.220 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.220 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.221 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:55.221 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.221 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.221 [2024-12-07 10:02:23.810925] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.221 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.221 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1353631 
00:27:55.221 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:27:55.221 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:57.121 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:27:57.121 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.121 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.121 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.121 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:27:57.121 "tick_rate": 2300000000, 00:27:57.121 "poll_groups": [ 00:27:57.121 { 00:27:57.121 "name": "nvmf_tgt_poll_group_000", 00:27:57.121 "admin_qpairs": 1, 00:27:57.121 "io_qpairs": 2, 00:27:57.121 "current_admin_qpairs": 1, 00:27:57.121 "current_io_qpairs": 2, 00:27:57.121 "pending_bdev_io": 0, 00:27:57.121 "completed_nvme_io": 28007, 00:27:57.121 "transports": [ 00:27:57.121 { 00:27:57.121 "trtype": "TCP" 00:27:57.121 } 00:27:57.121 ] 00:27:57.121 }, 00:27:57.121 { 00:27:57.121 "name": "nvmf_tgt_poll_group_001", 00:27:57.121 "admin_qpairs": 0, 00:27:57.121 "io_qpairs": 2, 00:27:57.121 "current_admin_qpairs": 0, 00:27:57.121 "current_io_qpairs": 2, 00:27:57.121 "pending_bdev_io": 0, 00:27:57.121 "completed_nvme_io": 28289, 00:27:57.121 "transports": [ 00:27:57.121 { 00:27:57.121 "trtype": "TCP" 00:27:57.121 } 00:27:57.121 ] 00:27:57.121 }, 00:27:57.121 { 00:27:57.121 "name": "nvmf_tgt_poll_group_002", 00:27:57.121 "admin_qpairs": 0, 00:27:57.121 "io_qpairs": 0, 00:27:57.121 "current_admin_qpairs": 0, 
00:27:57.121 "current_io_qpairs": 0, 00:27:57.121 "pending_bdev_io": 0, 00:27:57.121 "completed_nvme_io": 0, 00:27:57.121 "transports": [ 00:27:57.121 { 00:27:57.121 "trtype": "TCP" 00:27:57.121 } 00:27:57.121 ] 00:27:57.121 }, 00:27:57.121 { 00:27:57.121 "name": "nvmf_tgt_poll_group_003", 00:27:57.121 "admin_qpairs": 0, 00:27:57.121 "io_qpairs": 0, 00:27:57.121 "current_admin_qpairs": 0, 00:27:57.121 "current_io_qpairs": 0, 00:27:57.121 "pending_bdev_io": 0, 00:27:57.121 "completed_nvme_io": 0, 00:27:57.121 "transports": [ 00:27:57.121 { 00:27:57.121 "trtype": "TCP" 00:27:57.121 } 00:27:57.121 ] 00:27:57.121 } 00:27:57.121 ] 00:27:57.121 }' 00:27:57.379 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:57.379 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:27:57.379 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:27:57.379 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:27:57.379 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1353631 00:28:05.491 Initializing NVMe Controllers 00:28:05.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:05.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:05.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:05.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:05.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:05.491 Initialization complete. Launching workers. 
00:28:05.491 ======================================================== 00:28:05.491 Latency(us) 00:28:05.491 Device Information : IOPS MiB/s Average min max 00:28:05.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7096.90 27.72 9050.08 1374.37 54082.80 00:28:05.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7630.50 29.81 8412.72 1614.17 53190.25 00:28:05.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7561.70 29.54 8464.22 1362.34 53496.79 00:28:05.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7486.60 29.24 8575.83 1118.71 53849.02 00:28:05.491 ======================================================== 00:28:05.492 Total : 29775.70 116.31 8618.72 1118.71 54082.80 00:28:05.492 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:05.492 rmmod nvme_tcp 00:28:05.492 rmmod nvme_fabrics 00:28:05.492 rmmod nvme_keyring 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:05.492 10:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 1353595 ']' 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 1353595 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1353595 ']' 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1353595 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1353595 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1353595' 00:28:05.492 killing process with pid 1353595 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1353595 00:28:05.492 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1353595 00:28:05.755 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:05.755 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:05.755 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:05.755 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:05.755 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:28:05.755 
10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:05.755 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:28:05.755 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:05.755 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:05.755 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.755 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:05.755 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:09.045 00:28:09.045 real 0m49.629s 00:28:09.045 user 2m43.434s 00:28:09.045 sys 0m10.448s 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.045 ************************************ 00:28:09.045 END TEST nvmf_perf_adq 00:28:09.045 ************************************ 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:28:09.045 ************************************ 00:28:09.045 START TEST nvmf_shutdown 00:28:09.045 ************************************ 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:09.045 * Looking for test storage... 00:28:09.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:09.045 10:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:09.045 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:09.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.046 --rc genhtml_branch_coverage=1 00:28:09.046 --rc genhtml_function_coverage=1 00:28:09.046 --rc genhtml_legend=1 00:28:09.046 --rc geninfo_all_blocks=1 00:28:09.046 --rc geninfo_unexecuted_blocks=1 00:28:09.046 00:28:09.046 ' 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:09.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.046 --rc genhtml_branch_coverage=1 00:28:09.046 --rc genhtml_function_coverage=1 00:28:09.046 --rc genhtml_legend=1 00:28:09.046 --rc geninfo_all_blocks=1 00:28:09.046 --rc geninfo_unexecuted_blocks=1 00:28:09.046 00:28:09.046 ' 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:09.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.046 --rc genhtml_branch_coverage=1 00:28:09.046 --rc genhtml_function_coverage=1 00:28:09.046 --rc genhtml_legend=1 00:28:09.046 --rc geninfo_all_blocks=1 00:28:09.046 --rc geninfo_unexecuted_blocks=1 00:28:09.046 00:28:09.046 ' 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:09.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.046 --rc genhtml_branch_coverage=1 00:28:09.046 --rc genhtml_function_coverage=1 00:28:09.046 --rc genhtml_legend=1 00:28:09.046 --rc geninfo_all_blocks=1 00:28:09.046 --rc geninfo_unexecuted_blocks=1 00:28:09.046 00:28:09.046 ' 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:09.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@169 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:09.046 ************************************ 00:28:09.046 START TEST nvmf_shutdown_tc1 00:28:09.046 ************************************ 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:09.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:09.047 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:09.047 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.047 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:09.047 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.047 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:09.047 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:09.047 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:09.047 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:15.619 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:15.619 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:15.619 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:15.619 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:15.619 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:15.619 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:15.619 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:15.619 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:15.619 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:15.619 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:15.619 10:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:15.619 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:15.619 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:15.619 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:15.619 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:15.619 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.620 10:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:15.620 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:15.620 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.620 10:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:15.620 Found net devices under 0000:86:00.0: cvl_0_0 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:15.620 Found net devices under 0000:86:00.1: cvl_0_1 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # is_hw=yes 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:15.620 10:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:15.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:15.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:28:15.620 00:28:15.620 --- 10.0.0.2 ping statistics --- 00:28:15.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.620 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:15.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:15.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:28:15.620 00:28:15.620 --- 10.0.0.1 ping statistics --- 00:28:15.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.620 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # return 0 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:15.620 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:15.621 10:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # nvmfpid=1359035 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # waitforlisten 1359035 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1359035 ']' 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:15.621 [2024-12-07 10:02:43.443993] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:15.621 [2024-12-07 10:02:43.444040] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.621 [2024-12-07 10:02:43.503402] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:15.621 [2024-12-07 10:02:43.545515] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.621 [2024-12-07 10:02:43.545553] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.621 [2024-12-07 10:02:43.545560] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.621 [2024-12-07 10:02:43.545566] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:15.621 [2024-12-07 10:02:43.545571] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:15.621 [2024-12-07 10:02:43.545681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:15.621 [2024-12-07 10:02:43.545708] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:15.621 [2024-12-07 10:02:43.545797] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.621 [2024-12-07 10:02:43.545798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:15.621 [2024-12-07 10:02:43.687107] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.621 10:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.621 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:15.621 Malloc1 00:28:15.621 [2024-12-07 10:02:43.786676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.621 Malloc2 00:28:15.621 Malloc3 00:28:15.621 Malloc4 00:28:15.621 Malloc5 00:28:15.621 Malloc6 00:28:15.621 Malloc7 00:28:15.621 Malloc8 00:28:15.621 Malloc9 
00:28:15.621 Malloc10 00:28:15.621 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.621 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:15.621 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:15.621 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:15.621 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1359131 00:28:15.621 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1359131 /var/tmp/bdevperf.sock 00:28:15.621 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1359131 ']' 00:28:15.621 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:15.621 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:15.621 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:15.621 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:15.621 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:15.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:15.621 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:28:15.621 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:15.621 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:28:15.621 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:15.621 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:15.621 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:15.621 { 00:28:15.621 "params": { 00:28:15.621 "name": "Nvme$subsystem", 00:28:15.621 "trtype": "$TEST_TRANSPORT", 00:28:15.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.621 "adrfam": "ipv4", 00:28:15.621 "trsvcid": "$NVMF_PORT", 00:28:15.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.621 "hdgst": ${hdgst:-false}, 00:28:15.621 "ddgst": ${ddgst:-false} 00:28:15.621 }, 00:28:15.622 "method": "bdev_nvme_attach_controller" 00:28:15.622 } 00:28:15.622 EOF 00:28:15.622 )") 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:15.622 { 00:28:15.622 "params": { 00:28:15.622 "name": "Nvme$subsystem", 00:28:15.622 "trtype": "$TEST_TRANSPORT", 00:28:15.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.622 "adrfam": "ipv4", 00:28:15.622 "trsvcid": "$NVMF_PORT", 00:28:15.622 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.622 "hdgst": ${hdgst:-false}, 00:28:15.622 "ddgst": ${ddgst:-false} 00:28:15.622 }, 00:28:15.622 "method": "bdev_nvme_attach_controller" 00:28:15.622 } 00:28:15.622 EOF 00:28:15.622 )") 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:15.622 { 00:28:15.622 "params": { 00:28:15.622 "name": "Nvme$subsystem", 00:28:15.622 "trtype": "$TEST_TRANSPORT", 00:28:15.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.622 "adrfam": "ipv4", 00:28:15.622 "trsvcid": "$NVMF_PORT", 00:28:15.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.622 "hdgst": ${hdgst:-false}, 00:28:15.622 "ddgst": ${ddgst:-false} 00:28:15.622 }, 00:28:15.622 "method": "bdev_nvme_attach_controller" 00:28:15.622 } 00:28:15.622 EOF 00:28:15.622 )") 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:15.622 { 00:28:15.622 "params": { 00:28:15.622 "name": "Nvme$subsystem", 00:28:15.622 "trtype": "$TEST_TRANSPORT", 00:28:15.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.622 "adrfam": "ipv4", 00:28:15.622 "trsvcid": "$NVMF_PORT", 00:28:15.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.622 "hdgst": 
${hdgst:-false}, 00:28:15.622 "ddgst": ${ddgst:-false} 00:28:15.622 }, 00:28:15.622 "method": "bdev_nvme_attach_controller" 00:28:15.622 } 00:28:15.622 EOF 00:28:15.622 )") 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:15.622 { 00:28:15.622 "params": { 00:28:15.622 "name": "Nvme$subsystem", 00:28:15.622 "trtype": "$TEST_TRANSPORT", 00:28:15.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.622 "adrfam": "ipv4", 00:28:15.622 "trsvcid": "$NVMF_PORT", 00:28:15.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.622 "hdgst": ${hdgst:-false}, 00:28:15.622 "ddgst": ${ddgst:-false} 00:28:15.622 }, 00:28:15.622 "method": "bdev_nvme_attach_controller" 00:28:15.622 } 00:28:15.622 EOF 00:28:15.622 )") 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:15.622 { 00:28:15.622 "params": { 00:28:15.622 "name": "Nvme$subsystem", 00:28:15.622 "trtype": "$TEST_TRANSPORT", 00:28:15.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.622 "adrfam": "ipv4", 00:28:15.622 "trsvcid": "$NVMF_PORT", 00:28:15.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.622 "hdgst": ${hdgst:-false}, 00:28:15.622 "ddgst": ${ddgst:-false} 00:28:15.622 }, 00:28:15.622 "method": "bdev_nvme_attach_controller" 
00:28:15.622 } 00:28:15.622 EOF 00:28:15.622 )") 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:15.622 { 00:28:15.622 "params": { 00:28:15.622 "name": "Nvme$subsystem", 00:28:15.622 "trtype": "$TEST_TRANSPORT", 00:28:15.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.622 "adrfam": "ipv4", 00:28:15.622 "trsvcid": "$NVMF_PORT", 00:28:15.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.622 "hdgst": ${hdgst:-false}, 00:28:15.622 "ddgst": ${ddgst:-false} 00:28:15.622 }, 00:28:15.622 "method": "bdev_nvme_attach_controller" 00:28:15.622 } 00:28:15.622 EOF 00:28:15.622 )") 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:15.622 [2024-12-07 10:02:44.260124] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:28:15.622 [2024-12-07 10:02:44.260172] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:15.622 { 00:28:15.622 "params": { 00:28:15.622 "name": "Nvme$subsystem", 00:28:15.622 "trtype": "$TEST_TRANSPORT", 00:28:15.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.622 "adrfam": "ipv4", 00:28:15.622 "trsvcid": "$NVMF_PORT", 00:28:15.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.622 "hdgst": ${hdgst:-false}, 00:28:15.622 "ddgst": ${ddgst:-false} 00:28:15.622 }, 00:28:15.622 "method": "bdev_nvme_attach_controller" 00:28:15.622 } 00:28:15.622 EOF 00:28:15.622 )") 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:15.622 { 00:28:15.622 "params": { 00:28:15.622 "name": "Nvme$subsystem", 00:28:15.622 "trtype": "$TEST_TRANSPORT", 00:28:15.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.622 "adrfam": "ipv4", 00:28:15.622 "trsvcid": "$NVMF_PORT", 00:28:15.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.622 "hdgst": ${hdgst:-false}, 00:28:15.622 "ddgst": ${ddgst:-false} 00:28:15.622 }, 00:28:15.622 "method": "bdev_nvme_attach_controller" 
00:28:15.622 } 00:28:15.622 EOF 00:28:15.622 )") 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:15.622 { 00:28:15.622 "params": { 00:28:15.622 "name": "Nvme$subsystem", 00:28:15.622 "trtype": "$TEST_TRANSPORT", 00:28:15.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.622 "adrfam": "ipv4", 00:28:15.622 "trsvcid": "$NVMF_PORT", 00:28:15.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.622 "hdgst": ${hdgst:-false}, 00:28:15.622 "ddgst": ${ddgst:-false} 00:28:15.622 }, 00:28:15.622 "method": "bdev_nvme_attach_controller" 00:28:15.622 } 00:28:15.622 EOF 00:28:15.622 )") 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 
00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:28:15.622 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:28:15.622 "params": { 00:28:15.622 "name": "Nvme1", 00:28:15.622 "trtype": "tcp", 00:28:15.622 "traddr": "10.0.0.2", 00:28:15.622 "adrfam": "ipv4", 00:28:15.622 "trsvcid": "4420", 00:28:15.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:15.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:15.622 "hdgst": false, 00:28:15.622 "ddgst": false 00:28:15.622 }, 00:28:15.622 "method": "bdev_nvme_attach_controller" 00:28:15.622 },{ 00:28:15.622 "params": { 00:28:15.622 "name": "Nvme2", 00:28:15.622 "trtype": "tcp", 00:28:15.622 "traddr": "10.0.0.2", 00:28:15.622 "adrfam": "ipv4", 00:28:15.622 "trsvcid": "4420", 00:28:15.622 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:15.623 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:15.623 "hdgst": false, 00:28:15.623 "ddgst": false 00:28:15.623 }, 00:28:15.623 "method": "bdev_nvme_attach_controller" 00:28:15.623 },{ 00:28:15.623 "params": { 00:28:15.623 "name": "Nvme3", 00:28:15.623 "trtype": "tcp", 00:28:15.623 "traddr": "10.0.0.2", 00:28:15.623 "adrfam": "ipv4", 00:28:15.623 "trsvcid": "4420", 00:28:15.623 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:15.623 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:15.623 "hdgst": false, 00:28:15.623 "ddgst": false 00:28:15.623 }, 00:28:15.623 "method": "bdev_nvme_attach_controller" 00:28:15.623 },{ 00:28:15.623 "params": { 00:28:15.623 "name": "Nvme4", 00:28:15.623 "trtype": "tcp", 00:28:15.623 "traddr": "10.0.0.2", 00:28:15.623 "adrfam": "ipv4", 00:28:15.623 "trsvcid": "4420", 00:28:15.623 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:15.623 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:15.623 "hdgst": false, 00:28:15.623 "ddgst": false 00:28:15.623 }, 00:28:15.623 "method": "bdev_nvme_attach_controller" 00:28:15.623 },{ 00:28:15.623 "params": { 
00:28:15.623 "name": "Nvme5", 00:28:15.623 "trtype": "tcp", 00:28:15.623 "traddr": "10.0.0.2", 00:28:15.623 "adrfam": "ipv4", 00:28:15.623 "trsvcid": "4420", 00:28:15.623 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:15.623 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:15.623 "hdgst": false, 00:28:15.623 "ddgst": false 00:28:15.623 }, 00:28:15.623 "method": "bdev_nvme_attach_controller" 00:28:15.623 },{ 00:28:15.623 "params": { 00:28:15.623 "name": "Nvme6", 00:28:15.623 "trtype": "tcp", 00:28:15.623 "traddr": "10.0.0.2", 00:28:15.623 "adrfam": "ipv4", 00:28:15.623 "trsvcid": "4420", 00:28:15.623 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:15.623 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:15.623 "hdgst": false, 00:28:15.623 "ddgst": false 00:28:15.623 }, 00:28:15.623 "method": "bdev_nvme_attach_controller" 00:28:15.623 },{ 00:28:15.623 "params": { 00:28:15.623 "name": "Nvme7", 00:28:15.623 "trtype": "tcp", 00:28:15.623 "traddr": "10.0.0.2", 00:28:15.623 "adrfam": "ipv4", 00:28:15.623 "trsvcid": "4420", 00:28:15.623 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:15.623 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:15.623 "hdgst": false, 00:28:15.623 "ddgst": false 00:28:15.623 }, 00:28:15.623 "method": "bdev_nvme_attach_controller" 00:28:15.623 },{ 00:28:15.623 "params": { 00:28:15.623 "name": "Nvme8", 00:28:15.623 "trtype": "tcp", 00:28:15.623 "traddr": "10.0.0.2", 00:28:15.623 "adrfam": "ipv4", 00:28:15.623 "trsvcid": "4420", 00:28:15.623 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:15.623 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:15.623 "hdgst": false, 00:28:15.623 "ddgst": false 00:28:15.623 }, 00:28:15.623 "method": "bdev_nvme_attach_controller" 00:28:15.623 },{ 00:28:15.623 "params": { 00:28:15.623 "name": "Nvme9", 00:28:15.623 "trtype": "tcp", 00:28:15.623 "traddr": "10.0.0.2", 00:28:15.623 "adrfam": "ipv4", 00:28:15.623 "trsvcid": "4420", 00:28:15.623 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:15.623 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:15.623 "hdgst": false, 00:28:15.623 "ddgst": false 00:28:15.623 }, 00:28:15.623 "method": "bdev_nvme_attach_controller" 00:28:15.623 },{ 00:28:15.623 "params": { 00:28:15.623 "name": "Nvme10", 00:28:15.623 "trtype": "tcp", 00:28:15.623 "traddr": "10.0.0.2", 00:28:15.623 "adrfam": "ipv4", 00:28:15.623 "trsvcid": "4420", 00:28:15.623 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:15.623 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:15.623 "hdgst": false, 00:28:15.623 "ddgst": false 00:28:15.623 }, 00:28:15.623 "method": "bdev_nvme_attach_controller" 00:28:15.623 }' 00:28:15.623 [2024-12-07 10:02:44.315751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.882 [2024-12-07 10:02:44.356323] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.802 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:17.802 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:17.802 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:17.802 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.802 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.802 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.802 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1359131 00:28:17.802 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:17.802 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:18.738 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1359131 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1359035 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:18.738 { 00:28:18.738 "params": { 00:28:18.738 "name": "Nvme$subsystem", 00:28:18.738 "trtype": "$TEST_TRANSPORT", 00:28:18.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.738 "adrfam": "ipv4", 00:28:18.738 "trsvcid": "$NVMF_PORT", 00:28:18.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.738 "hdgst": ${hdgst:-false}, 00:28:18.738 "ddgst": ${ddgst:-false} 00:28:18.738 }, 00:28:18.738 "method": "bdev_nvme_attach_controller" 00:28:18.738 } 00:28:18.738 EOF 00:28:18.738 )") 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:18.738 10:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:18.738 { 00:28:18.738 "params": { 00:28:18.738 "name": "Nvme$subsystem", 00:28:18.738 "trtype": "$TEST_TRANSPORT", 00:28:18.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.738 "adrfam": "ipv4", 00:28:18.738 "trsvcid": "$NVMF_PORT", 00:28:18.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.738 "hdgst": ${hdgst:-false}, 00:28:18.738 "ddgst": ${ddgst:-false} 00:28:18.738 }, 00:28:18.738 "method": "bdev_nvme_attach_controller" 00:28:18.738 } 00:28:18.738 EOF 00:28:18.738 )") 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:18.738 { 00:28:18.738 "params": { 00:28:18.738 "name": "Nvme$subsystem", 00:28:18.738 "trtype": "$TEST_TRANSPORT", 00:28:18.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.738 "adrfam": "ipv4", 00:28:18.738 "trsvcid": "$NVMF_PORT", 00:28:18.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.738 "hdgst": ${hdgst:-false}, 00:28:18.738 "ddgst": ${ddgst:-false} 00:28:18.738 }, 00:28:18.738 "method": "bdev_nvme_attach_controller" 00:28:18.738 } 00:28:18.738 EOF 00:28:18.738 )") 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:18.738 
10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:18.738 { 00:28:18.738 "params": { 00:28:18.738 "name": "Nvme$subsystem", 00:28:18.738 "trtype": "$TEST_TRANSPORT", 00:28:18.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.738 "adrfam": "ipv4", 00:28:18.738 "trsvcid": "$NVMF_PORT", 00:28:18.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.738 "hdgst": ${hdgst:-false}, 00:28:18.738 "ddgst": ${ddgst:-false} 00:28:18.738 }, 00:28:18.738 "method": "bdev_nvme_attach_controller" 00:28:18.738 } 00:28:18.738 EOF 00:28:18.738 )") 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:18.738 { 00:28:18.738 "params": { 00:28:18.738 "name": "Nvme$subsystem", 00:28:18.738 "trtype": "$TEST_TRANSPORT", 00:28:18.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.738 "adrfam": "ipv4", 00:28:18.738 "trsvcid": "$NVMF_PORT", 00:28:18.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.738 "hdgst": ${hdgst:-false}, 00:28:18.738 "ddgst": ${ddgst:-false} 00:28:18.738 }, 00:28:18.738 "method": "bdev_nvme_attach_controller" 00:28:18.738 } 00:28:18.738 EOF 00:28:18.738 )") 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:18.738 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 
00:28:18.738 { 00:28:18.738 "params": { 00:28:18.738 "name": "Nvme$subsystem", 00:28:18.738 "trtype": "$TEST_TRANSPORT", 00:28:18.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.738 "adrfam": "ipv4", 00:28:18.738 "trsvcid": "$NVMF_PORT", 00:28:18.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.738 "hdgst": ${hdgst:-false}, 00:28:18.738 "ddgst": ${ddgst:-false} 00:28:18.738 }, 00:28:18.738 "method": "bdev_nvme_attach_controller" 00:28:18.738 } 00:28:18.738 EOF 00:28:18.739 )") 00:28:18.739 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:18.739 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:18.739 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:18.739 { 00:28:18.739 "params": { 00:28:18.739 "name": "Nvme$subsystem", 00:28:18.739 "trtype": "$TEST_TRANSPORT", 00:28:18.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.739 "adrfam": "ipv4", 00:28:18.739 "trsvcid": "$NVMF_PORT", 00:28:18.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.739 "hdgst": ${hdgst:-false}, 00:28:18.739 "ddgst": ${ddgst:-false} 00:28:18.739 }, 00:28:18.739 "method": "bdev_nvme_attach_controller" 00:28:18.739 } 00:28:18.739 EOF 00:28:18.739 )") 00:28:18.739 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:18.739 [2024-12-07 10:02:47.192986] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:28:18.739 [2024-12-07 10:02:47.193036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1359617 ] 00:28:18.739 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:18.739 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:18.739 { 00:28:18.739 "params": { 00:28:18.739 "name": "Nvme$subsystem", 00:28:18.739 "trtype": "$TEST_TRANSPORT", 00:28:18.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.739 "adrfam": "ipv4", 00:28:18.739 "trsvcid": "$NVMF_PORT", 00:28:18.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.739 "hdgst": ${hdgst:-false}, 00:28:18.739 "ddgst": ${ddgst:-false} 00:28:18.739 }, 00:28:18.739 "method": "bdev_nvme_attach_controller" 00:28:18.739 } 00:28:18.739 EOF 00:28:18.739 )") 00:28:18.739 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:18.739 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:18.739 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:18.739 { 00:28:18.739 "params": { 00:28:18.739 "name": "Nvme$subsystem", 00:28:18.739 "trtype": "$TEST_TRANSPORT", 00:28:18.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.739 "adrfam": "ipv4", 00:28:18.739 "trsvcid": "$NVMF_PORT", 00:28:18.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.739 "hdgst": ${hdgst:-false}, 00:28:18.739 "ddgst": ${ddgst:-false} 00:28:18.739 }, 00:28:18.739 "method": 
"bdev_nvme_attach_controller" 00:28:18.739 } 00:28:18.739 EOF 00:28:18.739 )") 00:28:18.739 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:18.739 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:18.739 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:18.739 { 00:28:18.739 "params": { 00:28:18.739 "name": "Nvme$subsystem", 00:28:18.739 "trtype": "$TEST_TRANSPORT", 00:28:18.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.739 "adrfam": "ipv4", 00:28:18.739 "trsvcid": "$NVMF_PORT", 00:28:18.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.739 "hdgst": ${hdgst:-false}, 00:28:18.739 "ddgst": ${ddgst:-false} 00:28:18.739 }, 00:28:18.739 "method": "bdev_nvme_attach_controller" 00:28:18.739 } 00:28:18.739 EOF 00:28:18.739 )") 00:28:18.739 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:18.739 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 
00:28:18.739 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:28:18.739 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:28:18.739 "params": { 00:28:18.739 "name": "Nvme1", 00:28:18.739 "trtype": "tcp", 00:28:18.739 "traddr": "10.0.0.2", 00:28:18.739 "adrfam": "ipv4", 00:28:18.739 "trsvcid": "4420", 00:28:18.739 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.739 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:18.739 "hdgst": false, 00:28:18.739 "ddgst": false 00:28:18.739 }, 00:28:18.739 "method": "bdev_nvme_attach_controller" 00:28:18.739 },{ 00:28:18.739 "params": { 00:28:18.739 "name": "Nvme2", 00:28:18.739 "trtype": "tcp", 00:28:18.739 "traddr": "10.0.0.2", 00:28:18.739 "adrfam": "ipv4", 00:28:18.739 "trsvcid": "4420", 00:28:18.739 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:18.739 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:18.739 "hdgst": false, 00:28:18.739 "ddgst": false 00:28:18.739 }, 00:28:18.739 "method": "bdev_nvme_attach_controller" 00:28:18.739 },{ 00:28:18.739 "params": { 00:28:18.739 "name": "Nvme3", 00:28:18.739 "trtype": "tcp", 00:28:18.739 "traddr": "10.0.0.2", 00:28:18.739 "adrfam": "ipv4", 00:28:18.739 "trsvcid": "4420", 00:28:18.739 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:18.739 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:18.739 "hdgst": false, 00:28:18.739 "ddgst": false 00:28:18.739 }, 00:28:18.739 "method": "bdev_nvme_attach_controller" 00:28:18.739 },{ 00:28:18.739 "params": { 00:28:18.739 "name": "Nvme4", 00:28:18.739 "trtype": "tcp", 00:28:18.739 "traddr": "10.0.0.2", 00:28:18.739 "adrfam": "ipv4", 00:28:18.739 "trsvcid": "4420", 00:28:18.739 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:18.739 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:18.739 "hdgst": false, 00:28:18.739 "ddgst": false 00:28:18.739 }, 00:28:18.739 "method": "bdev_nvme_attach_controller" 00:28:18.739 },{ 00:28:18.739 "params": { 
00:28:18.739 "name": "Nvme5", 00:28:18.739 "trtype": "tcp", 00:28:18.739 "traddr": "10.0.0.2", 00:28:18.739 "adrfam": "ipv4", 00:28:18.739 "trsvcid": "4420", 00:28:18.739 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:18.739 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:18.739 "hdgst": false, 00:28:18.739 "ddgst": false 00:28:18.739 }, 00:28:18.739 "method": "bdev_nvme_attach_controller" 00:28:18.739 },{ 00:28:18.739 "params": { 00:28:18.739 "name": "Nvme6", 00:28:18.739 "trtype": "tcp", 00:28:18.739 "traddr": "10.0.0.2", 00:28:18.739 "adrfam": "ipv4", 00:28:18.739 "trsvcid": "4420", 00:28:18.739 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:18.739 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:18.739 "hdgst": false, 00:28:18.739 "ddgst": false 00:28:18.739 }, 00:28:18.739 "method": "bdev_nvme_attach_controller" 00:28:18.739 },{ 00:28:18.739 "params": { 00:28:18.739 "name": "Nvme7", 00:28:18.739 "trtype": "tcp", 00:28:18.739 "traddr": "10.0.0.2", 00:28:18.739 "adrfam": "ipv4", 00:28:18.739 "trsvcid": "4420", 00:28:18.739 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:18.739 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:18.739 "hdgst": false, 00:28:18.739 "ddgst": false 00:28:18.739 }, 00:28:18.739 "method": "bdev_nvme_attach_controller" 00:28:18.739 },{ 00:28:18.739 "params": { 00:28:18.739 "name": "Nvme8", 00:28:18.739 "trtype": "tcp", 00:28:18.739 "traddr": "10.0.0.2", 00:28:18.739 "adrfam": "ipv4", 00:28:18.739 "trsvcid": "4420", 00:28:18.739 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:18.739 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:18.739 "hdgst": false, 00:28:18.739 "ddgst": false 00:28:18.739 }, 00:28:18.739 "method": "bdev_nvme_attach_controller" 00:28:18.739 },{ 00:28:18.739 "params": { 00:28:18.739 "name": "Nvme9", 00:28:18.739 "trtype": "tcp", 00:28:18.739 "traddr": "10.0.0.2", 00:28:18.739 "adrfam": "ipv4", 00:28:18.739 "trsvcid": "4420", 00:28:18.739 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:18.739 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:18.739 "hdgst": false, 00:28:18.739 "ddgst": false 00:28:18.739 }, 00:28:18.739 "method": "bdev_nvme_attach_controller" 00:28:18.739 },{ 00:28:18.739 "params": { 00:28:18.739 "name": "Nvme10", 00:28:18.739 "trtype": "tcp", 00:28:18.739 "traddr": "10.0.0.2", 00:28:18.739 "adrfam": "ipv4", 00:28:18.739 "trsvcid": "4420", 00:28:18.739 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:18.739 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:18.739 "hdgst": false, 00:28:18.739 "ddgst": false 00:28:18.739 }, 00:28:18.739 "method": "bdev_nvme_attach_controller" 00:28:18.739 }' 00:28:18.739 [2024-12-07 10:02:47.251353] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.739 [2024-12-07 10:02:47.292084] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.118 Running I/O for 1 seconds... 00:28:21.495 2184.00 IOPS, 136.50 MiB/s 00:28:21.495 Latency(us) 00:28:21.495 [2024-12-07T09:02:50.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.495 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.495 Verification LBA range: start 0x0 length 0x400 00:28:21.495 Nvme1n1 : 1.08 237.21 14.83 0.00 0.00 267434.07 18578.03 227039.50 00:28:21.495 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.495 Verification LBA range: start 0x0 length 0x400 00:28:21.495 Nvme2n1 : 1.08 236.36 14.77 0.00 0.00 264429.97 17552.25 221568.67 00:28:21.495 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.495 Verification LBA range: start 0x0 length 0x400 00:28:21.495 Nvme3n1 : 1.13 287.39 17.96 0.00 0.00 210575.29 14702.86 212450.62 00:28:21.495 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.495 Verification LBA range: start 0x0 length 0x400 00:28:21.495 Nvme4n1 : 1.14 280.48 17.53 0.00 0.00 216616.69 13620.09 225215.89 00:28:21.495 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:21.495 Verification LBA range: start 0x0 length 0x400 00:28:21.495 Nvme5n1 : 1.15 278.17 17.39 0.00 0.00 214309.67 13278.16 217921.45 00:28:21.495 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.495 Verification LBA range: start 0x0 length 0x400 00:28:21.495 Nvme6n1 : 1.14 281.75 17.61 0.00 0.00 209262.86 22795.13 210627.01 00:28:21.495 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.495 Verification LBA range: start 0x0 length 0x400 00:28:21.495 Nvme7n1 : 1.15 282.94 17.68 0.00 0.00 205149.49 2535.96 216097.84 00:28:21.495 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.495 Verification LBA range: start 0x0 length 0x400 00:28:21.495 Nvme8n1 : 1.15 277.39 17.34 0.00 0.00 206397.08 14075.99 237069.36 00:28:21.495 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.495 Verification LBA range: start 0x0 length 0x400 00:28:21.495 Nvme9n1 : 1.16 276.12 17.26 0.00 0.00 204293.88 18464.06 230686.72 00:28:21.495 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.495 Verification LBA range: start 0x0 length 0x400 00:28:21.495 Nvme10n1 : 1.16 275.61 17.23 0.00 0.00 201570.66 14474.91 251658.24 00:28:21.495 [2024-12-07T09:02:50.221Z] =================================================================================================================== 00:28:21.495 [2024-12-07T09:02:50.221Z] Total : 2713.42 169.59 0.00 0.00 218063.73 2535.96 251658.24 00:28:21.495 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:21.495 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:21.495 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:28:21.495 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:21.495 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:21.495 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:21.495 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:21.495 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:21.495 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:21.495 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:21.495 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:21.495 rmmod nvme_tcp 00:28:21.495 rmmod nvme_fabrics 00:28:21.754 rmmod nvme_keyring 00:28:21.754 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:21.754 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:21.754 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:21.754 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@513 -- # '[' -n 1359035 ']' 00:28:21.754 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # killprocess 1359035 00:28:21.754 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1359035 ']' 00:28:21.754 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@954 -- # kill -0 1359035 00:28:21.754 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:28:21.754 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:21.754 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1359035 00:28:21.754 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:21.754 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:21.754 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1359035' 00:28:21.754 killing process with pid 1359035 00:28:21.754 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1359035 00:28:21.754 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1359035 00:28:22.014 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:22.014 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:22.014 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:22.014 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:22.014 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-save 00:28:22.014 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-restore 00:28:22.014 10:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:22.014 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:22.014 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:22.014 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.014 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.014 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:24.548 00:28:24.548 real 0m15.023s 00:28:24.548 user 0m34.311s 00:28:24.548 sys 0m5.598s 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:24.548 ************************************ 00:28:24.548 END TEST nvmf_shutdown_tc1 00:28:24.548 ************************************ 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:24.548 ************************************ 
00:28:24.548 START TEST nvmf_shutdown_tc2 00:28:24.548 ************************************ 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:24.548 10:02:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:24.548 10:02:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.548 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:24.549 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:24.549 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ 
ice == unbound ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:24.549 Found net devices under 0000:86:00.0: cvl_0_0 00:28:24.549 10:02:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:24.549 Found net devices under 0000:86:00.1: cvl_0_1 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # is_hw=yes 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:24.549 10:02:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.549 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.550 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:24.550 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:24.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:24.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:28:24.550 00:28:24.550 --- 10.0.0.2 ping statistics --- 00:28:24.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.550 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:24.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:28:24.550 00:28:24.550 --- 10.0.0.1 ping statistics --- 00:28:24.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.550 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # return 0 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:24.550 10:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # nvmfpid=1360675 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # waitforlisten 1360675 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1360675 ']' 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:24.550 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.550 [2024-12-07 10:02:53.199148] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:24.550 [2024-12-07 10:02:53.199193] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.550 [2024-12-07 10:02:53.257804] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:24.810 [2024-12-07 10:02:53.299365] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.810 [2024-12-07 10:02:53.299409] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.810 [2024-12-07 10:02:53.299416] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.810 [2024-12-07 10:02:53.299422] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.810 [2024-12-07 10:02:53.299427] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:24.810 [2024-12-07 10:02:53.299543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:24.810 [2024-12-07 10:02:53.299571] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:24.810 [2024-12-07 10:02:53.299662] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.810 [2024-12-07 10:02:53.299663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:28:24.810 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:24.810 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:24.810 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:24.810 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:24.810 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.810 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.810 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:24.810 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.810 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.810 [2024-12-07 10:02:53.454135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.811 10:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.811 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.811 Malloc1 00:28:25.070 [2024-12-07 10:02:53.552764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.070 Malloc2 00:28:25.070 Malloc3 00:28:25.070 Malloc4 00:28:25.070 Malloc5 00:28:25.070 Malloc6 00:28:25.070 Malloc7 00:28:25.331 Malloc8 00:28:25.331 Malloc9 
00:28:25.331 Malloc10 00:28:25.331 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.331 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:25.331 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:25.331 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.331 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1360915 00:28:25.331 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1360915 /var/tmp/bdevperf.sock 00:28:25.331 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1360915 ']' 00:28:25.331 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:25.331 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:25.331 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:25.331 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:25.331 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:28:25.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:25.331 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # config=() 00:28:25.331 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:25.331 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # local subsystem config 00:28:25.331 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.331 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.331 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.331 { 00:28:25.331 "params": { 00:28:25.331 "name": "Nvme$subsystem", 00:28:25.331 "trtype": "$TEST_TRANSPORT", 00:28:25.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.331 "adrfam": "ipv4", 00:28:25.331 "trsvcid": "$NVMF_PORT", 00:28:25.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.331 "hdgst": ${hdgst:-false}, 00:28:25.331 "ddgst": ${ddgst:-false} 00:28:25.331 }, 00:28:25.331 "method": "bdev_nvme_attach_controller" 00:28:25.331 } 00:28:25.331 EOF 00:28:25.331 )") 00:28:25.332 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:25.332 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.332 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.332 { 00:28:25.332 "params": { 00:28:25.332 "name": "Nvme$subsystem", 00:28:25.332 "trtype": "$TEST_TRANSPORT", 00:28:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.332 
"adrfam": "ipv4", 00:28:25.332 "trsvcid": "$NVMF_PORT", 00:28:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.332 "hdgst": ${hdgst:-false}, 00:28:25.332 "ddgst": ${ddgst:-false} 00:28:25.332 }, 00:28:25.332 "method": "bdev_nvme_attach_controller" 00:28:25.332 } 00:28:25.332 EOF 00:28:25.332 )") 00:28:25.332 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:25.332 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.332 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.332 { 00:28:25.332 "params": { 00:28:25.332 "name": "Nvme$subsystem", 00:28:25.332 "trtype": "$TEST_TRANSPORT", 00:28:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.332 "adrfam": "ipv4", 00:28:25.332 "trsvcid": "$NVMF_PORT", 00:28:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.332 "hdgst": ${hdgst:-false}, 00:28:25.332 "ddgst": ${ddgst:-false} 00:28:25.332 }, 00:28:25.332 "method": "bdev_nvme_attach_controller" 00:28:25.332 } 00:28:25.332 EOF 00:28:25.332 )") 00:28:25.332 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:25.332 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.332 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.332 { 00:28:25.332 "params": { 00:28:25.332 "name": "Nvme$subsystem", 00:28:25.332 "trtype": "$TEST_TRANSPORT", 00:28:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.332 "adrfam": "ipv4", 00:28:25.332 "trsvcid": "$NVMF_PORT", 00:28:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:28:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.332 "hdgst": ${hdgst:-false}, 00:28:25.332 "ddgst": ${ddgst:-false} 00:28:25.332 }, 00:28:25.332 "method": "bdev_nvme_attach_controller" 00:28:25.332 } 00:28:25.332 EOF 00:28:25.332 )") 00:28:25.332 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.332 { 00:28:25.332 "params": { 00:28:25.332 "name": "Nvme$subsystem", 00:28:25.332 "trtype": "$TEST_TRANSPORT", 00:28:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.332 "adrfam": "ipv4", 00:28:25.332 "trsvcid": "$NVMF_PORT", 00:28:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.332 "hdgst": ${hdgst:-false}, 00:28:25.332 "ddgst": ${ddgst:-false} 00:28:25.332 }, 00:28:25.332 "method": "bdev_nvme_attach_controller" 00:28:25.332 } 00:28:25.332 EOF 00:28:25.332 )") 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.332 { 00:28:25.332 "params": { 00:28:25.332 "name": "Nvme$subsystem", 00:28:25.332 "trtype": "$TEST_TRANSPORT", 00:28:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.332 "adrfam": "ipv4", 00:28:25.332 "trsvcid": "$NVMF_PORT", 00:28:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.332 "hdgst": ${hdgst:-false}, 00:28:25.332 "ddgst": 
${ddgst:-false} 00:28:25.332 }, 00:28:25.332 "method": "bdev_nvme_attach_controller" 00:28:25.332 } 00:28:25.332 EOF 00:28:25.332 )") 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.332 [2024-12-07 10:02:54.019236] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:25.332 [2024-12-07 10:02:54.019286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1360915 ] 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.332 { 00:28:25.332 "params": { 00:28:25.332 "name": "Nvme$subsystem", 00:28:25.332 "trtype": "$TEST_TRANSPORT", 00:28:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.332 "adrfam": "ipv4", 00:28:25.332 "trsvcid": "$NVMF_PORT", 00:28:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.332 "hdgst": ${hdgst:-false}, 00:28:25.332 "ddgst": ${ddgst:-false} 00:28:25.332 }, 00:28:25.332 "method": "bdev_nvme_attach_controller" 00:28:25.332 } 00:28:25.332 EOF 00:28:25.332 )") 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.332 { 00:28:25.332 "params": { 00:28:25.332 "name": "Nvme$subsystem", 00:28:25.332 "trtype": "$TEST_TRANSPORT", 
00:28:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.332 "adrfam": "ipv4", 00:28:25.332 "trsvcid": "$NVMF_PORT", 00:28:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.332 "hdgst": ${hdgst:-false}, 00:28:25.332 "ddgst": ${ddgst:-false} 00:28:25.332 }, 00:28:25.332 "method": "bdev_nvme_attach_controller" 00:28:25.332 } 00:28:25.332 EOF 00:28:25.332 )") 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.332 { 00:28:25.332 "params": { 00:28:25.332 "name": "Nvme$subsystem", 00:28:25.332 "trtype": "$TEST_TRANSPORT", 00:28:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.332 "adrfam": "ipv4", 00:28:25.332 "trsvcid": "$NVMF_PORT", 00:28:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.332 "hdgst": ${hdgst:-false}, 00:28:25.332 "ddgst": ${ddgst:-false} 00:28:25.332 }, 00:28:25.332 "method": "bdev_nvme_attach_controller" 00:28:25.332 } 00:28:25.332 EOF 00:28:25.332 )") 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.332 { 00:28:25.332 "params": { 00:28:25.332 "name": "Nvme$subsystem", 00:28:25.332 "trtype": "$TEST_TRANSPORT", 00:28:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.332 "adrfam": "ipv4", 00:28:25.332 "trsvcid": "$NVMF_PORT", 00:28:25.332 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.332 "hdgst": ${hdgst:-false}, 00:28:25.332 "ddgst": ${ddgst:-false} 00:28:25.332 }, 00:28:25.332 "method": "bdev_nvme_attach_controller" 00:28:25.332 } 00:28:25.332 EOF 00:28:25.332 )") 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # jq . 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@581 -- # IFS=, 00:28:25.332 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:28:25.332 "params": { 00:28:25.332 "name": "Nvme1", 00:28:25.332 "trtype": "tcp", 00:28:25.332 "traddr": "10.0.0.2", 00:28:25.332 "adrfam": "ipv4", 00:28:25.332 "trsvcid": "4420", 00:28:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:25.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:25.332 "hdgst": false, 00:28:25.332 "ddgst": false 00:28:25.332 }, 00:28:25.332 "method": "bdev_nvme_attach_controller" 00:28:25.332 },{ 00:28:25.332 "params": { 00:28:25.332 "name": "Nvme2", 00:28:25.332 "trtype": "tcp", 00:28:25.332 "traddr": "10.0.0.2", 00:28:25.332 "adrfam": "ipv4", 00:28:25.332 "trsvcid": "4420", 00:28:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:25.332 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:25.333 "hdgst": false, 00:28:25.333 "ddgst": false 00:28:25.333 }, 00:28:25.333 "method": "bdev_nvme_attach_controller" 00:28:25.333 },{ 00:28:25.333 "params": { 00:28:25.333 "name": "Nvme3", 00:28:25.333 "trtype": "tcp", 00:28:25.333 "traddr": "10.0.0.2", 00:28:25.333 "adrfam": "ipv4", 00:28:25.333 "trsvcid": "4420", 00:28:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:25.333 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:25.333 "hdgst": false, 00:28:25.333 "ddgst": false 00:28:25.333 }, 
00:28:25.333 "method": "bdev_nvme_attach_controller" 00:28:25.333 },{ 00:28:25.333 "params": { 00:28:25.333 "name": "Nvme4", 00:28:25.333 "trtype": "tcp", 00:28:25.333 "traddr": "10.0.0.2", 00:28:25.333 "adrfam": "ipv4", 00:28:25.333 "trsvcid": "4420", 00:28:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:25.333 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:25.333 "hdgst": false, 00:28:25.333 "ddgst": false 00:28:25.333 }, 00:28:25.333 "method": "bdev_nvme_attach_controller" 00:28:25.333 },{ 00:28:25.333 "params": { 00:28:25.333 "name": "Nvme5", 00:28:25.333 "trtype": "tcp", 00:28:25.333 "traddr": "10.0.0.2", 00:28:25.333 "adrfam": "ipv4", 00:28:25.333 "trsvcid": "4420", 00:28:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:25.333 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:25.333 "hdgst": false, 00:28:25.333 "ddgst": false 00:28:25.333 }, 00:28:25.333 "method": "bdev_nvme_attach_controller" 00:28:25.333 },{ 00:28:25.333 "params": { 00:28:25.333 "name": "Nvme6", 00:28:25.333 "trtype": "tcp", 00:28:25.333 "traddr": "10.0.0.2", 00:28:25.333 "adrfam": "ipv4", 00:28:25.333 "trsvcid": "4420", 00:28:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:25.333 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:25.333 "hdgst": false, 00:28:25.333 "ddgst": false 00:28:25.333 }, 00:28:25.333 "method": "bdev_nvme_attach_controller" 00:28:25.333 },{ 00:28:25.333 "params": { 00:28:25.333 "name": "Nvme7", 00:28:25.333 "trtype": "tcp", 00:28:25.333 "traddr": "10.0.0.2", 00:28:25.333 "adrfam": "ipv4", 00:28:25.333 "trsvcid": "4420", 00:28:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:25.333 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:25.333 "hdgst": false, 00:28:25.333 "ddgst": false 00:28:25.333 }, 00:28:25.333 "method": "bdev_nvme_attach_controller" 00:28:25.333 },{ 00:28:25.333 "params": { 00:28:25.333 "name": "Nvme8", 00:28:25.333 "trtype": "tcp", 00:28:25.333 "traddr": "10.0.0.2", 00:28:25.333 "adrfam": "ipv4", 00:28:25.333 "trsvcid": "4420", 00:28:25.333 
"subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:25.333 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:25.333 "hdgst": false, 00:28:25.333 "ddgst": false 00:28:25.333 }, 00:28:25.333 "method": "bdev_nvme_attach_controller" 00:28:25.333 },{ 00:28:25.333 "params": { 00:28:25.333 "name": "Nvme9", 00:28:25.333 "trtype": "tcp", 00:28:25.333 "traddr": "10.0.0.2", 00:28:25.333 "adrfam": "ipv4", 00:28:25.333 "trsvcid": "4420", 00:28:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:25.333 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:25.333 "hdgst": false, 00:28:25.333 "ddgst": false 00:28:25.333 }, 00:28:25.333 "method": "bdev_nvme_attach_controller" 00:28:25.333 },{ 00:28:25.333 "params": { 00:28:25.333 "name": "Nvme10", 00:28:25.333 "trtype": "tcp", 00:28:25.333 "traddr": "10.0.0.2", 00:28:25.333 "adrfam": "ipv4", 00:28:25.333 "trsvcid": "4420", 00:28:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:25.333 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:25.333 "hdgst": false, 00:28:25.333 "ddgst": false 00:28:25.333 }, 00:28:25.333 "method": "bdev_nvme_attach_controller" 00:28:25.333 }' 00:28:25.593 [2024-12-07 10:02:54.075027] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.593 [2024-12-07 10:02:54.114967] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.973 Running I/O for 10 seconds... 
00:28:27.232 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:27.232 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:27.232 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:27.232 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.232 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.232 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.232 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:27.232 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:27.232 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:27.232 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:27.232 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:27.232 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:27.232 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:27.232 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:27.232 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.232 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.232 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:27.232 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.492 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:27.492 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:27.492 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:27.492 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:27.492 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:27.752 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:27.752 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:27.752 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.752 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.752 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.752 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:28:27.752 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:28:27.752 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:27.752 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:27.752 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:27.752 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1360915 00:28:27.752 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1360915 ']' 00:28:27.752 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1360915 00:28:27.752 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:28:27.752 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:27.752 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1360915 00:28:27.753 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:27.753 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:27.753 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1360915' 00:28:27.753 killing process with pid 1360915 00:28:27.753 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1360915 00:28:27.753 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1360915 00:28:27.753 
Received shutdown signal, test time was about 0.949047 seconds 00:28:27.753 00:28:27.753 Latency(us) 00:28:27.753 [2024-12-07T09:02:56.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.753 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.753 Verification LBA range: start 0x0 length 0x400 00:28:27.753 Nvme1n1 : 0.93 275.79 17.24 0.00 0.00 229565.22 26442.35 208803.39 00:28:27.753 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.753 Verification LBA range: start 0x0 length 0x400 00:28:27.753 Nvme2n1 : 0.94 270.96 16.94 0.00 0.00 229766.46 15956.59 225215.89 00:28:27.753 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.753 Verification LBA range: start 0x0 length 0x400 00:28:27.753 Nvme3n1 : 0.92 282.74 17.67 0.00 0.00 215385.16 3875.17 215186.03 00:28:27.753 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.753 Verification LBA range: start 0x0 length 0x400 00:28:27.753 Nvme4n1 : 0.92 276.96 17.31 0.00 0.00 216792.93 13050.21 221568.67 00:28:27.753 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.753 Verification LBA range: start 0x0 length 0x400 00:28:27.753 Nvme5n1 : 0.94 271.82 16.99 0.00 0.00 217158.79 16640.45 227951.30 00:28:27.753 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.753 Verification LBA range: start 0x0 length 0x400 00:28:27.753 Nvme6n1 : 0.93 301.87 18.87 0.00 0.00 188156.97 12993.22 211538.81 00:28:27.753 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.753 Verification LBA range: start 0x0 length 0x400 00:28:27.753 Nvme7n1 : 0.93 274.21 17.14 0.00 0.00 207132.94 17096.35 197861.73 00:28:27.753 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.753 Verification LBA range: start 0x0 length 0x400 00:28:27.753 Nvme8n1 : 0.94 273.24 17.08 0.00 0.00 
203985.03 14702.86 207891.59 00:28:27.753 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.753 Verification LBA range: start 0x0 length 0x400 00:28:27.753 Nvme9n1 : 0.95 269.93 16.87 0.00 0.00 202929.42 16754.42 235245.75 00:28:27.753 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:27.753 Verification LBA range: start 0x0 length 0x400 00:28:27.753 Nvme10n1 : 0.91 211.23 13.20 0.00 0.00 252285.40 18122.13 249834.63 00:28:27.753 [2024-12-07T09:02:56.479Z] =================================================================================================================== 00:28:27.753 [2024-12-07T09:02:56.479Z] Total : 2708.76 169.30 0.00 0.00 215123.85 3875.17 249834.63 00:28:28.012 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1360675 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 
-- # sync 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:28.951 rmmod nvme_tcp 00:28:28.951 rmmod nvme_fabrics 00:28:28.951 rmmod nvme_keyring 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@513 -- # '[' -n 1360675 ']' 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # killprocess 1360675 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1360675 ']' 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1360675 00:28:28.951 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:28:29.211 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:29.211 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1360675 00:28:29.211 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:29.211 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:29.211 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1360675' 00:28:29.211 killing process with pid 1360675 00:28:29.211 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1360675 00:28:29.211 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1360675 00:28:29.471 10:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:29.471 10:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:29.471 10:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:29.471 10:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:29.471 10:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:29.471 10:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-save 00:28:29.471 10:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-restore 00:28:29.471 10:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:29.471 10:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:29.471 10:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.471 10:02:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:29.471 10:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.013 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:32.013 00:28:32.013 real 0m7.346s 00:28:32.013 user 0m21.583s 00:28:32.013 sys 0m1.332s 00:28:32.013 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:32.013 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:32.013 ************************************ 00:28:32.013 END TEST nvmf_shutdown_tc2 00:28:32.013 ************************************ 00:28:32.013 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@171 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:32.013 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:32.013 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:32.013 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:32.013 ************************************ 00:28:32.013 START TEST nvmf_shutdown_tc3 00:28:32.013 ************************************ 00:28:32.013 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:28:32.013 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:32.013 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:32.013 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:32.013 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:32.013 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:32.013 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:32.014 
10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:32.014 10:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:32.014 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:32.014 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:32.014 10:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:32.014 Found net devices under 0000:86:00.0: cvl_0_0 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:32.014 10:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:32.014 Found net devices under 0000:86:00.1: cvl_0_1 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # is_hw=yes 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:32.014 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:32.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:32.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:28:32.015 00:28:32.015 --- 10.0.0.2 ping statistics --- 00:28:32.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.015 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:32.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:32.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:28:32.015 00:28:32.015 --- 10.0.0.1 ping statistics --- 00:28:32.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.015 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # return 0 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.015 
10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # nvmfpid=1362156 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # waitforlisten 1362156 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1362156 ']' 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:32.015 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.015 [2024-12-07 10:03:00.590718] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:28:32.015 [2024-12-07 10:03:00.590764] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.015 [2024-12-07 10:03:00.649708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:32.015 [2024-12-07 10:03:00.691505] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:32.015 [2024-12-07 10:03:00.691545] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.015 [2024-12-07 10:03:00.691552] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:32.015 [2024-12-07 10:03:00.691558] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:32.015 [2024-12-07 10:03:00.691563] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:32.015 [2024-12-07 10:03:00.691677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:32.015 [2024-12-07 10:03:00.691704] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:32.015 [2024-12-07 10:03:00.691799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.015 [2024-12-07 10:03:00.691800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.276 [2024-12-07 10:03:00.845155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.276 10:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.276 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.276 Malloc1 00:28:32.276 [2024-12-07 10:03:00.943553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.276 Malloc2 00:28:32.536 Malloc3 00:28:32.536 Malloc4 00:28:32.536 Malloc5 00:28:32.536 Malloc6 00:28:32.536 Malloc7 00:28:32.536 Malloc8 00:28:32.797 Malloc9 
00:28:32.797 Malloc10 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1362319 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1362319 /var/tmp/bdevperf.sock 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1362319 ']' 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:28:32.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # config=() 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # local subsystem config 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:32.797 { 00:28:32.797 "params": { 00:28:32.797 "name": "Nvme$subsystem", 00:28:32.797 "trtype": "$TEST_TRANSPORT", 00:28:32.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.797 "adrfam": "ipv4", 00:28:32.797 "trsvcid": "$NVMF_PORT", 00:28:32.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.797 "hdgst": ${hdgst:-false}, 00:28:32.797 "ddgst": ${ddgst:-false} 00:28:32.797 }, 00:28:32.797 "method": "bdev_nvme_attach_controller" 00:28:32.797 } 00:28:32.797 EOF 00:28:32.797 )") 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:32.797 { 00:28:32.797 "params": { 00:28:32.797 "name": "Nvme$subsystem", 00:28:32.797 "trtype": "$TEST_TRANSPORT", 00:28:32.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.797 
"adrfam": "ipv4", 00:28:32.797 "trsvcid": "$NVMF_PORT", 00:28:32.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.797 "hdgst": ${hdgst:-false}, 00:28:32.797 "ddgst": ${ddgst:-false} 00:28:32.797 }, 00:28:32.797 "method": "bdev_nvme_attach_controller" 00:28:32.797 } 00:28:32.797 EOF 00:28:32.797 )") 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:32.797 { 00:28:32.797 "params": { 00:28:32.797 "name": "Nvme$subsystem", 00:28:32.797 "trtype": "$TEST_TRANSPORT", 00:28:32.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.797 "adrfam": "ipv4", 00:28:32.797 "trsvcid": "$NVMF_PORT", 00:28:32.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.797 "hdgst": ${hdgst:-false}, 00:28:32.797 "ddgst": ${ddgst:-false} 00:28:32.797 }, 00:28:32.797 "method": "bdev_nvme_attach_controller" 00:28:32.797 } 00:28:32.797 EOF 00:28:32.797 )") 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:32.797 { 00:28:32.797 "params": { 00:28:32.797 "name": "Nvme$subsystem", 00:28:32.797 "trtype": "$TEST_TRANSPORT", 00:28:32.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.797 "adrfam": "ipv4", 00:28:32.797 "trsvcid": "$NVMF_PORT", 00:28:32.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:28:32.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.797 "hdgst": ${hdgst:-false}, 00:28:32.797 "ddgst": ${ddgst:-false} 00:28:32.797 }, 00:28:32.797 "method": "bdev_nvme_attach_controller" 00:28:32.797 } 00:28:32.797 EOF 00:28:32.797 )") 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:32.797 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:32.797 { 00:28:32.797 "params": { 00:28:32.797 "name": "Nvme$subsystem", 00:28:32.797 "trtype": "$TEST_TRANSPORT", 00:28:32.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.798 "adrfam": "ipv4", 00:28:32.798 "trsvcid": "$NVMF_PORT", 00:28:32.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.798 "hdgst": ${hdgst:-false}, 00:28:32.798 "ddgst": ${ddgst:-false} 00:28:32.798 }, 00:28:32.798 "method": "bdev_nvme_attach_controller" 00:28:32.798 } 00:28:32.798 EOF 00:28:32.798 )") 00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:32.798 { 00:28:32.798 "params": { 00:28:32.798 "name": "Nvme$subsystem", 00:28:32.798 "trtype": "$TEST_TRANSPORT", 00:28:32.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.798 "adrfam": "ipv4", 00:28:32.798 "trsvcid": "$NVMF_PORT", 00:28:32.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.798 "hdgst": ${hdgst:-false}, 00:28:32.798 "ddgst": 
${ddgst:-false} 00:28:32.798 }, 00:28:32.798 "method": "bdev_nvme_attach_controller" 00:28:32.798 } 00:28:32.798 EOF 00:28:32.798 )") 00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:32.798 { 00:28:32.798 "params": { 00:28:32.798 "name": "Nvme$subsystem", 00:28:32.798 "trtype": "$TEST_TRANSPORT", 00:28:32.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.798 "adrfam": "ipv4", 00:28:32.798 "trsvcid": "$NVMF_PORT", 00:28:32.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.798 "hdgst": ${hdgst:-false}, 00:28:32.798 "ddgst": ${ddgst:-false} 00:28:32.798 }, 00:28:32.798 "method": "bdev_nvme_attach_controller" 00:28:32.798 } 00:28:32.798 EOF 00:28:32.798 )") 00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:32.798 [2024-12-07 10:03:01.420120] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:28:32.798 [2024-12-07 10:03:01.420171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1362319 ] 00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:32.798 { 00:28:32.798 "params": { 00:28:32.798 "name": "Nvme$subsystem", 00:28:32.798 "trtype": "$TEST_TRANSPORT", 00:28:32.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.798 "adrfam": "ipv4", 00:28:32.798 "trsvcid": "$NVMF_PORT", 00:28:32.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.798 "hdgst": ${hdgst:-false}, 00:28:32.798 "ddgst": ${ddgst:-false} 00:28:32.798 }, 00:28:32.798 "method": "bdev_nvme_attach_controller" 00:28:32.798 } 00:28:32.798 EOF 00:28:32.798 )") 00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:32.798 { 00:28:32.798 "params": { 00:28:32.798 "name": "Nvme$subsystem", 00:28:32.798 "trtype": "$TEST_TRANSPORT", 00:28:32.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.798 "adrfam": "ipv4", 00:28:32.798 "trsvcid": "$NVMF_PORT", 00:28:32.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.798 "hdgst": ${hdgst:-false}, 00:28:32.798 "ddgst": ${ddgst:-false} 00:28:32.798 }, 00:28:32.798 "method": 
"bdev_nvme_attach_controller" 00:28:32.798 } 00:28:32.798 EOF 00:28:32.798 )") 00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:32.798 { 00:28:32.798 "params": { 00:28:32.798 "name": "Nvme$subsystem", 00:28:32.798 "trtype": "$TEST_TRANSPORT", 00:28:32.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.798 "adrfam": "ipv4", 00:28:32.798 "trsvcid": "$NVMF_PORT", 00:28:32.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.798 "hdgst": ${hdgst:-false}, 00:28:32.798 "ddgst": ${ddgst:-false} 00:28:32.798 }, 00:28:32.798 "method": "bdev_nvme_attach_controller" 00:28:32.798 } 00:28:32.798 EOF 00:28:32.798 )") 00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # jq . 
00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@581 -- # IFS=, 00:28:32.798 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:28:32.798 "params": { 00:28:32.798 "name": "Nvme1", 00:28:32.798 "trtype": "tcp", 00:28:32.798 "traddr": "10.0.0.2", 00:28:32.798 "adrfam": "ipv4", 00:28:32.798 "trsvcid": "4420", 00:28:32.798 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:32.798 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:32.798 "hdgst": false, 00:28:32.798 "ddgst": false 00:28:32.798 }, 00:28:32.798 "method": "bdev_nvme_attach_controller" 00:28:32.798 },{ 00:28:32.798 "params": { 00:28:32.798 "name": "Nvme2", 00:28:32.798 "trtype": "tcp", 00:28:32.798 "traddr": "10.0.0.2", 00:28:32.798 "adrfam": "ipv4", 00:28:32.798 "trsvcid": "4420", 00:28:32.798 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:32.798 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:32.798 "hdgst": false, 00:28:32.798 "ddgst": false 00:28:32.798 }, 00:28:32.798 "method": "bdev_nvme_attach_controller" 00:28:32.798 },{ 00:28:32.798 "params": { 00:28:32.798 "name": "Nvme3", 00:28:32.798 "trtype": "tcp", 00:28:32.798 "traddr": "10.0.0.2", 00:28:32.798 "adrfam": "ipv4", 00:28:32.798 "trsvcid": "4420", 00:28:32.798 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:32.798 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:32.798 "hdgst": false, 00:28:32.799 "ddgst": false 00:28:32.799 }, 00:28:32.799 "method": "bdev_nvme_attach_controller" 00:28:32.799 },{ 00:28:32.799 "params": { 00:28:32.799 "name": "Nvme4", 00:28:32.799 "trtype": "tcp", 00:28:32.799 "traddr": "10.0.0.2", 00:28:32.799 "adrfam": "ipv4", 00:28:32.799 "trsvcid": "4420", 00:28:32.799 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:32.799 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:32.799 "hdgst": false, 00:28:32.799 "ddgst": false 00:28:32.799 }, 00:28:32.799 "method": "bdev_nvme_attach_controller" 00:28:32.799 },{ 00:28:32.799 "params": { 
00:28:32.799 "name": "Nvme5", 00:28:32.799 "trtype": "tcp", 00:28:32.799 "traddr": "10.0.0.2", 00:28:32.799 "adrfam": "ipv4", 00:28:32.799 "trsvcid": "4420", 00:28:32.799 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:32.799 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:32.799 "hdgst": false, 00:28:32.799 "ddgst": false 00:28:32.799 }, 00:28:32.799 "method": "bdev_nvme_attach_controller" 00:28:32.799 },{ 00:28:32.799 "params": { 00:28:32.799 "name": "Nvme6", 00:28:32.799 "trtype": "tcp", 00:28:32.799 "traddr": "10.0.0.2", 00:28:32.799 "adrfam": "ipv4", 00:28:32.799 "trsvcid": "4420", 00:28:32.799 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:32.799 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:32.799 "hdgst": false, 00:28:32.799 "ddgst": false 00:28:32.799 }, 00:28:32.799 "method": "bdev_nvme_attach_controller" 00:28:32.799 },{ 00:28:32.799 "params": { 00:28:32.799 "name": "Nvme7", 00:28:32.799 "trtype": "tcp", 00:28:32.799 "traddr": "10.0.0.2", 00:28:32.799 "adrfam": "ipv4", 00:28:32.799 "trsvcid": "4420", 00:28:32.799 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:32.799 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:32.799 "hdgst": false, 00:28:32.799 "ddgst": false 00:28:32.799 }, 00:28:32.799 "method": "bdev_nvme_attach_controller" 00:28:32.799 },{ 00:28:32.799 "params": { 00:28:32.799 "name": "Nvme8", 00:28:32.799 "trtype": "tcp", 00:28:32.799 "traddr": "10.0.0.2", 00:28:32.799 "adrfam": "ipv4", 00:28:32.799 "trsvcid": "4420", 00:28:32.799 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:32.799 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:32.799 "hdgst": false, 00:28:32.799 "ddgst": false 00:28:32.799 }, 00:28:32.799 "method": "bdev_nvme_attach_controller" 00:28:32.799 },{ 00:28:32.799 "params": { 00:28:32.799 "name": "Nvme9", 00:28:32.799 "trtype": "tcp", 00:28:32.799 "traddr": "10.0.0.2", 00:28:32.799 "adrfam": "ipv4", 00:28:32.799 "trsvcid": "4420", 00:28:32.799 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:32.799 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:32.799 "hdgst": false, 00:28:32.799 "ddgst": false 00:28:32.799 }, 00:28:32.799 "method": "bdev_nvme_attach_controller" 00:28:32.799 },{ 00:28:32.799 "params": { 00:28:32.799 "name": "Nvme10", 00:28:32.799 "trtype": "tcp", 00:28:32.799 "traddr": "10.0.0.2", 00:28:32.799 "adrfam": "ipv4", 00:28:32.799 "trsvcid": "4420", 00:28:32.799 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:32.799 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:32.799 "hdgst": false, 00:28:32.799 "ddgst": false 00:28:32.799 }, 00:28:32.799 "method": "bdev_nvme_attach_controller" 00:28:32.799 }' 00:28:32.799 [2024-12-07 10:03:01.476821] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.799 [2024-12-07 10:03:01.517015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.367 Running I/O for 10 seconds... 00:28:34.663 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:34.663 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:28:34.663 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:34.663 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.663 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:34.663 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.663 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:34.663 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:28:34.663 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:34.663 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:34.663 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:34.663 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:34.664 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:34.664 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:34.664 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:34.664 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:34.664 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.664 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:34.664 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.664 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=14 00:28:34.664 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 14 -ge 100 ']' 00:28:34.664 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:34.923 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- 
)) 00:28:34.923 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:34.923 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:34.923 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:34.923 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.923 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.204 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.204 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:35.204 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:35.204 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:35.204 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:35.204 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:35.204 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1362156 00:28:35.204 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1362156 ']' 00:28:35.204 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1362156 00:28:35.204 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:28:35.204 10:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:35.204 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1362156 00:28:35.204 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:35.204 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:35.204 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1362156' 00:28:35.204 killing process with pid 1362156 00:28:35.204 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1362156 00:28:35.204 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1362156 00:28:35.204 [2024-12-07 10:03:03.732370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be 
set 00:28:35.204 [2024-12-07 10:03:03.732465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 
10:03:03.732545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732624] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.204 [2024-12-07 10:03:03.732689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 
is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.732835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa2f0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.734645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6ceb0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.734678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6ceb0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.734689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6ceb0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.734699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6ceb0 is same with the state(6) to be set 
00:28:35.205 [2024-12-07 10:03:03.734708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6ceb0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735465] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.205 [2024-12-07 10:03:03.735592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xcfa7c0 is same with the state(6) to be set 00:28:35.206 [2024-12-07 10:03:03.738278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb180 is same with the state(6) to be set 00:28:35.206 [2024-12-07 10:03:03.739466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb650 is same with the state(6) to be set 00:28:35.207 [2024-12-07 10:03:03.740748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbb40 is same with the state(6) to be set 00:28:35.208 [2024-12-07 10:03:03.742439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.208 [2024-12-07 10:03:03.742653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010
is same with the state(6) to be set 00:28:35.208 [2024-12-07 10:03:03.742659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.208 [2024-12-07 10:03:03.742666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.208 [2024-12-07 10:03:03.742672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.208 [2024-12-07 10:03:03.742678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.208 [2024-12-07 10:03:03.742686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.208 [2024-12-07 10:03:03.742693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.208 [2024-12-07 10:03:03.742700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.208 [2024-12-07 10:03:03.742706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.208 [2024-12-07 10:03:03.742712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.208 [2024-12-07 10:03:03.742720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.208 [2024-12-07 10:03:03.742727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.208 [2024-12-07 10:03:03.742734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 
00:28:35.208 [2024-12-07 10:03:03.742740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.208 [2024-12-07 10:03:03.742746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.208 [2024-12-07 10:03:03.742753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742818] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.742873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc010 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 
is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 
00:28:35.209 [2024-12-07 10:03:03.743846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743923] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.743997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.744003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.744009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.744015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.744021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.744028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.209 [2024-12-07 10:03:03.744034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.210 [2024-12-07 10:03:03.744042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.210 [2024-12-07 10:03:03.744048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.210 [2024-12-07 10:03:03.744054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.210 [2024-12-07 10:03:03.744060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.210 [2024-12-07 10:03:03.744066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.210 [2024-12-07 10:03:03.744072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc500 is same with the state(6) to be set 00:28:35.210 [2024-12-07 10:03:03.746460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 
lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:35.210 [2024-12-07 10:03:03.746591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746681] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 10:03:03.746926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.210 [2024-12-07 10:03:03.746934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.210 [2024-12-07 
10:03:03.746941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:35.210 [2024-12-07 10:03:03.746955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.210 [2024-12-07 10:03:03.746962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/completion pairs for cid:8 through cid:40 (lba 17408 through 21504, step 128), each aborted with SQ DELETION (00/08), elided ...]
00:28:35.211 [2024-12-07 10:03:03.747478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.211 [2024-12-07 10:03:03.747485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:35.211 [2024-12-07 10:03:03.747576] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x28e7180 was disconnected and freed. reset controller.
00:28:35.211 [2024-12-07 10:03:03.747636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:35.211 [2024-12-07 10:03:03.747646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical ASYNC EVENT REQUEST/completion pairs for cid:1 through cid:3, elided ...]
00:28:35.211 [2024-12-07 10:03:03.747695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e6090 is same with the state(6) to be set
[... nine further blocks of four aborted ASYNC EVENT REQUESTs (cid:0 through cid:3), each ending with the same recv-state error for tqpair=0x2271d80, 0x26e6de0, 0x21a6610, 0x26da520, 0x26ad5e0, 0x26b47d0, 0x2278ee0, 0x2271210, and 0x2278a60 respectively, elided ...]
00:28:35.212 [2024-12-07 10:03:03.748596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.212 [2024-12-07 10:03:03.748610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE/completion pairs for cid:5 through cid:41 (lba 25216 through 29824, step 128), each aborted with SQ DELETION (00/08), elided ...]
00:28:35.213 [2024-12-07 10:03:03.749208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.213 [2024-12-07 10:03:03.749214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.213 [2024-12-07 10:03:03.749224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.213 [2024-12-07 10:03:03.749233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.213 [2024-12-07 10:03:03.749242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.213 [2024-12-07 10:03:03.749249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.213 [2024-12-07 10:03:03.749257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.213 [2024-12-07 10:03:03.749264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.213 [2024-12-07 10:03:03.749272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.213 [2024-12-07 10:03:03.749278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.213 [2024-12-07 10:03:03.749287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.213 [2024-12-07 10:03:03.749294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:35.213 [2024-12-07 10:03:03.749302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.213 [2024-12-07 10:03:03.749308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.213 [2024-12-07 10:03:03.749317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.213 [2024-12-07 10:03:03.749323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 
10:03:03.749383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749660] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x28f7090 was disconnected and freed. reset controller. 
00:28:35.214 [2024-12-07 10:03:03.749939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.749990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.749998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.750006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.750014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.750022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.750029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.750037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.750044] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.750052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.750059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.750068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.750074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.750083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.750089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.750098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.750104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.750112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.750119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.750127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.750134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.750145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.750152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.750161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.750167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.750176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.750183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.214 [2024-12-07 10:03:03.750191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.214 [2024-12-07 10:03:03.750198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:35.215 [2024-12-07 10:03:03.750221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750302] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 
10:03:03.750572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750659] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.750697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.750705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.755551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.755566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.755577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.755586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.755593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.755603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.755609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.755618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.755625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.755633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.755640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.755649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.215 [2024-12-07 10:03:03.755655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.215 [2024-12-07 10:03:03.755664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.755670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.755679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 
[2024-12-07 10:03:03.755685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.755694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.755701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.755709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.755716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.755725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.755732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.755740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.755746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.755755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.755761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.755771] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.755778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.755787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.755793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.755856] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x28e5c70 was disconnected and freed. reset controller. 00:28:35.216 [2024-12-07 10:03:03.759300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:35.216 [2024-12-07 10:03:03.759342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:35.216 [2024-12-07 10:03:03.759359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2271210 (9): Bad file descriptor 00:28:35.216 [2024-12-07 10:03:03.759372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26e6090 (9): Bad file descriptor 00:28:35.216 [2024-12-07 10:03:03.759386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2271d80 (9): Bad file descriptor 00:28:35.216 [2024-12-07 10:03:03.759404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26e6de0 (9): Bad file descriptor 00:28:35.216 [2024-12-07 10:03:03.759417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6610 (9): Bad file descriptor 00:28:35.216 [2024-12-07 10:03:03.759434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x26da520 (9): Bad file descriptor 00:28:35.216 [2024-12-07 10:03:03.759451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ad5e0 (9): Bad file descriptor 00:28:35.216 [2024-12-07 10:03:03.759468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26b47d0 (9): Bad file descriptor 00:28:35.216 [2024-12-07 10:03:03.759485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2278ee0 (9): Bad file descriptor 00:28:35.216 [2024-12-07 10:03:03.759501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2278a60 (9): Bad file descriptor 00:28:35.216 [2024-12-07 10:03:03.760143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:35.216 [2024-12-07 10:03:03.760557] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:35.216 [2024-12-07 10:03:03.760613] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:35.216 [2024-12-07 10:03:03.760657] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:35.216 [2024-12-07 10:03:03.760701] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:35.216 [2024-12-07 10:03:03.760749] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:35.216 [2024-12-07 10:03:03.761138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.216 [2024-12-07 10:03:03.761157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26e6090 with addr=10.0.0.2, port=4420 00:28:35.216 [2024-12-07 10:03:03.761167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e6090 is same with the state(6) to be set 00:28:35.216 [2024-12-07 10:03:03.761339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:35.216 [2024-12-07 10:03:03.761351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2271210 with addr=10.0.0.2, port=4420 00:28:35.216 [2024-12-07 10:03:03.761359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271210 is same with the state(6) to be set 00:28:35.216 [2024-12-07 10:03:03.761462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.216 [2024-12-07 10:03:03.761480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26e6de0 with addr=10.0.0.2, port=4420 00:28:35.216 [2024-12-07 10:03:03.761488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e6de0 is same with the state(6) to be set 00:28:35.216 [2024-12-07 10:03:03.761545] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:35.216 [2024-12-07 10:03:03.761594] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:35.216 [2024-12-07 10:03:03.761684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26e6090 (9): Bad file descriptor 00:28:35.216 [2024-12-07 10:03:03.761698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2271210 (9): Bad file descriptor 00:28:35.216 [2024-12-07 10:03:03.761708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26e6de0 (9): Bad file descriptor 00:28:35.216 [2024-12-07 10:03:03.761761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:35.216 [2024-12-07 10:03:03.761771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:35.216 [2024-12-07 10:03:03.761781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:28:35.216 [2024-12-07 10:03:03.761795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:35.216 [2024-12-07 10:03:03.761802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:35.216 [2024-12-07 10:03:03.761809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:35.216 [2024-12-07 10:03:03.761820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:35.216 [2024-12-07 10:03:03.761827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:35.216 [2024-12-07 10:03:03.761833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:35.216 [2024-12-07 10:03:03.761876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:35.216 [2024-12-07 10:03:03.761884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:35.216 [2024-12-07 10:03:03.761891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:35.216 [2024-12-07 10:03:03.769444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.769462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.769476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.769484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.769493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.769500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.769509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.769516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.769525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.769531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.769545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.769552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.769560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.769567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.769576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.769583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.769591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.769598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.769607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.769613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.769622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.769629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.216 [2024-12-07 10:03:03.769637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.216 [2024-12-07 10:03:03.769644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:35.217 [2024-12-07 10:03:03.769728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769814] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.769986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.769995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.770002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.770010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.770017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.770025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.770032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.770040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.770047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.770055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.770061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.770070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 
10:03:03.770076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.770085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.770092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.770100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.770107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.770115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.770124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.770134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.770140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.770149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.770156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.770164] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.770170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.770179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.770186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.770194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.770201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.770209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.770216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.770225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.770231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.770240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.217 [2024-12-07 10:03:03.770247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.217 [2024-12-07 10:03:03.770255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.770262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.770270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.770277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.770285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.770291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.770300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.770307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.770318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.770324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.770333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 
[2024-12-07 10:03:03.770340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.770348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.770354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.770363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.770370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.770379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.770385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.770394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.770401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.770409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.770416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.770424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.770431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.770439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.770446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.770453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227b7e0 is same with the state(6) to be set 00:28:35.218 [2024-12-07 10:03:03.771476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:35.218 [2024-12-07 10:03:03.771631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771713] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.218 [2024-12-07 10:03:03.771822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.218 [2024-12-07 10:03:03.771830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.771836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.771845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.771852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.771860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.771867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.771875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.771881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.771890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.771897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.771906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.771912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.771922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.771929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.771937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.771944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.771956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.771963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.771971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 
10:03:03.771978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.771986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.771993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772062] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 
[2024-12-07 10:03:03.772238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.219 [2024-12-07 10:03:03.772429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.219 [2024-12-07 10:03:03.772436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.772444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.772451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.772459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.772466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.772474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227c980 is same with the state(6) to be set 00:28:35.220 [2024-12-07 10:03:03.773489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:35.220 [2024-12-07 10:03:03.773518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:35.220 [2024-12-07 10:03:03.773782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773864] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.773990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.773998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.774005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.774013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.774020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.774028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.774035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.774043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.220 [2024-12-07 10:03:03.774050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.220 [2024-12-07 10:03:03.774058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 
10:03:03.774125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774208] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 
[2024-12-07 10:03:03.774383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.774474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.774483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28f8470 is same with the state(6) to be set 00:28:35.221 [2024-12-07 10:03:03.775498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.775514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.775525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.775532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.775541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.775548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.775557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.775564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.775573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.775579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.775589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.775596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.775604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.775611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.775619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.775626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.775634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.775641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.775649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.775656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:35.221 [2024-12-07 10:03:03.775665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.221 [2024-12-07 10:03:03.775671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.221 [2024-12-07 10:03:03.775680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775752] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.775989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.775997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.776004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 
10:03:03.776019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.776034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.776049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.776064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.776080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.776098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776106] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.776113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.776128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.776143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.776158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.776174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.776188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.776204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.776219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.776234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.776249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 [2024-12-07 10:03:03.776264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.222 
[2024-12-07 10:03:03.776280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.222 [2024-12-07 10:03:03.776289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.223 [2024-12-07 10:03:03.776295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.223 [2024-12-07 10:03:03.776303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.223 [2024-12-07 10:03:03.776311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.223 [2024-12-07 10:03:03.776319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.223 [2024-12-07 10:03:03.776326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.223 [2024-12-07 10:03:03.776336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.223 [2024-12-07 10:03:03.776343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.223 [2024-12-07 10:03:03.776351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.223 [2024-12-07 10:03:03.776358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.223 [2024-12-07 10:03:03.776366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.223 [2024-12-07 10:03:03.776372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) record pairs repeat for cid:56-63, lba:23552-24448 (step 128) ...]
00:28:35.223 [2024-12-07 10:03:03.776502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28f9960 is same with the state(6) to be set
00:28:35.223 [2024-12-07 10:03:03.777527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.223 [2024-12-07 10:03:03.777541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) record pairs repeat for cid:1-63, lba:16512-24448 (step 128) ...]
00:28:35.224 [2024-12-07 10:03:03.778517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2684ce0 is same with the state(6) to be set
00:28:35.224 [2024-12-07 10:03:03.779531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.224 [2024-12-07 10:03:03.779543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) record pairs repeat for cid:1-38, lba:16512-21248 (step 128) ...]
00:28:35.225 [2024-12-07 10:03:03.780140] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.780146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.780155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.780162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.780171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.780178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.780186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.780193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.780202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.780209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.780217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.780224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.780232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.780239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.780249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.780256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.780264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.780271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.780280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.780286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.780295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.780302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.780310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 
[2024-12-07 10:03:03.780317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.780326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.780332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.780341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.780348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.780356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.785106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.785124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.785132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.785140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.785148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.785157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.785164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.785172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.785179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.785187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.785198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.785206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.785213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.785221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.785228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.785236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.785243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.785251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.785257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.785266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.785272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.785280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28fae90 is same with the state(6) to be set 00:28:35.225 [2024-12-07 10:03:03.786313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.786328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.786341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.786349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.786360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.786367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:35.225 [2024-12-07 10:03:03.786377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.786384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.786394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.786402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.786412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.786419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.786429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.786440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.786450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.225 [2024-12-07 10:03:03.786458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.225 [2024-12-07 10:03:03.786467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:35.226 [2024-12-07 10:03:03.786672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786764] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.786983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.786991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.787000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.787008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.787017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.787024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.787034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.787041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.787051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 
10:03:03.787058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.787068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.787075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.787087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.787095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.787104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.787111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.787121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.787129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.787138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.787145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.787154] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.787162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.787171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.787178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.787188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.787195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.787204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.787212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.787221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.787229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.787238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.787245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.787255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.787263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.787271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.787279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.226 [2024-12-07 10:03:03.787288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.226 [2024-12-07 10:03:03.787298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.227 [2024-12-07 10:03:03.787307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.227 [2024-12-07 10:03:03.787315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.227 [2024-12-07 10:03:03.787324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.227 [2024-12-07 10:03:03.787331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.227 [2024-12-07 10:03:03.787340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.227 
[2024-12-07 10:03:03.787348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.227 [2024-12-07 10:03:03.787357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.227 [2024-12-07 10:03:03.787364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.227 [2024-12-07 10:03:03.787374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.227 [2024-12-07 10:03:03.787381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.227 [2024-12-07 10:03:03.787390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.227 [2024-12-07 10:03:03.787398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.227 [2024-12-07 10:03:03.787407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.227 [2024-12-07 10:03:03.787414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.227 [2024-12-07 10:03:03.787423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28e4870 is same with the state(6) to be set 00:28:35.227 [2024-12-07 10:03:03.788532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:35.227 [2024-12-07 10:03:03.788549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:35.227 [2024-12-07 10:03:03.788559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:35.227 [2024-12-07 10:03:03.788569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:35.227 [2024-12-07 10:03:03.788659] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:35.227 [2024-12-07 10:03:03.788673] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:35.227 [2024-12-07 10:03:03.788686] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:35.227 [2024-12-07 10:03:03.788763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:35.227 [2024-12-07 10:03:03.788774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:35.227 task offset: 21760 on job bdev=Nvme10n1 fails 00:28:35.227 00:28:35.227 Latency(us) 00:28:35.227 [2024-12-07T09:03:03.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.227 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.227 Job: Nvme1n1 ended in about 0.73 seconds with error 00:28:35.227 Verification LBA range: start 0x0 length 0x400 00:28:35.227 Nvme1n1 : 0.73 176.09 11.01 88.05 0.00 239326.31 16640.45 222480.47 00:28:35.227 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.227 Job: Nvme2n1 ended in about 0.73 seconds with error 00:28:35.227 Verification LBA range: start 0x0 length 0x400 00:28:35.227 Nvme2n1 : 0.73 182.47 11.40 87.80 0.00 228801.06 10884.67 216097.84 00:28:35.227 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.227 Job: Nvme3n1 ended in about 
0.71 seconds with error 00:28:35.227 Verification LBA range: start 0x0 length 0x400 00:28:35.227 Nvme3n1 : 0.71 269.10 16.82 89.70 0.00 168154.94 11340.58 224304.08 00:28:35.227 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.227 Job: Nvme4n1 ended in about 0.73 seconds with error 00:28:35.227 Verification LBA range: start 0x0 length 0x400 00:28:35.227 Nvme4n1 : 0.73 175.12 10.95 87.56 0.00 224812.82 18008.15 232510.33 00:28:35.227 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.227 Job: Nvme5n1 ended in about 0.73 seconds with error 00:28:35.227 Verification LBA range: start 0x0 length 0x400 00:28:35.227 Nvme5n1 : 0.73 174.64 10.92 87.32 0.00 220148.72 32369.09 224304.08 00:28:35.227 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.227 Job: Nvme6n1 ended in about 0.73 seconds with error 00:28:35.227 Verification LBA range: start 0x0 length 0x400 00:28:35.227 Nvme6n1 : 0.73 174.17 10.89 87.08 0.00 215519.65 18464.06 221568.67 00:28:35.227 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.227 Job: Nvme7n1 ended in about 0.74 seconds with error 00:28:35.227 Verification LBA range: start 0x0 length 0x400 00:28:35.227 Nvme7n1 : 0.74 172.58 10.79 86.29 0.00 212442.90 17210.32 221568.67 00:28:35.227 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.227 Job: Nvme8n1 ended in about 0.74 seconds with error 00:28:35.227 Verification LBA range: start 0x0 length 0x400 00:28:35.227 Nvme8n1 : 0.74 177.46 11.09 86.04 0.00 203639.02 16184.54 188743.68 00:28:35.227 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.227 Job: Nvme9n1 ended in about 0.71 seconds with error 00:28:35.227 Verification LBA range: start 0x0 length 0x400 00:28:35.227 Nvme9n1 : 0.71 179.09 11.19 89.54 0.00 192884.65 12081.42 226127.69 00:28:35.227 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:28:35.227 Job: Nvme10n1 ended in about 0.71 seconds with error 00:28:35.227 Verification LBA range: start 0x0 length 0x400 00:28:35.227 Nvme10n1 : 0.71 179.68 11.23 89.84 0.00 186856.55 16412.49 242540.19 00:28:35.227 [2024-12-07T09:03:03.953Z] =================================================================================================================== 00:28:35.227 [2024-12-07T09:03:03.953Z] Total : 1860.40 116.28 879.23 0.00 207976.47 10884.67 242540.19 00:28:35.227 [2024-12-07 10:03:03.819676] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:35.227 [2024-12-07 10:03:03.819729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:35.227 [2024-12-07 10:03:03.820090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.227 [2024-12-07 10:03:03.820110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2278ee0 with addr=10.0.0.2, port=4420 00:28:35.227 [2024-12-07 10:03:03.820121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2278ee0 is same with the state(6) to be set 00:28:35.227 [2024-12-07 10:03:03.820353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.227 [2024-12-07 10:03:03.820364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2278a60 with addr=10.0.0.2, port=4420 00:28:35.227 [2024-12-07 10:03:03.820372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2278a60 is same with the state(6) to be set 00:28:35.227 [2024-12-07 10:03:03.820605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.227 [2024-12-07 10:03:03.820616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2271d80 with addr=10.0.0.2, port=4420 00:28:35.227 [2024-12-07 10:03:03.820623] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271d80 is same with the state(6) to be set 00:28:35.227 [2024-12-07 10:03:03.820861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.227 [2024-12-07 10:03:03.820872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26ad5e0 with addr=10.0.0.2, port=4420 00:28:35.227 [2024-12-07 10:03:03.820879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ad5e0 is same with the state(6) to be set 00:28:35.227 [2024-12-07 10:03:03.822431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:35.227 [2024-12-07 10:03:03.822450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:35.227 [2024-12-07 10:03:03.822760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.227 [2024-12-07 10:03:03.822775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26b47d0 with addr=10.0.0.2, port=4420 00:28:35.227 [2024-12-07 10:03:03.822783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26b47d0 is same with the state(6) to be set 00:28:35.227 [2024-12-07 10:03:03.823019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.227 [2024-12-07 10:03:03.823030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6610 with addr=10.0.0.2, port=4420 00:28:35.227 [2024-12-07 10:03:03.823038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6610 is same with the state(6) to be set 00:28:35.227 [2024-12-07 10:03:03.823212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.227 [2024-12-07 10:03:03.823223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x26da520 with addr=10.0.0.2, port=4420 00:28:35.227 [2024-12-07 10:03:03.823230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26da520 is same with the state(6) to be set 00:28:35.227 [2024-12-07 10:03:03.823242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2278ee0 (9): Bad file descriptor 00:28:35.227 [2024-12-07 10:03:03.823254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2278a60 (9): Bad file descriptor 00:28:35.227 [2024-12-07 10:03:03.823263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2271d80 (9): Bad file descriptor 00:28:35.227 [2024-12-07 10:03:03.823271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ad5e0 (9): Bad file descriptor 00:28:35.227 [2024-12-07 10:03:03.823298] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:35.227 [2024-12-07 10:03:03.823311] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:35.227 [2024-12-07 10:03:03.823320] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:35.227 [2024-12-07 10:03:03.823332] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:35.227 [2024-12-07 10:03:03.823341] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:35.227 [2024-12-07 10:03:03.823401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:35.227 [2024-12-07 10:03:03.823638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.227 [2024-12-07 10:03:03.823652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26e6de0 with addr=10.0.0.2, port=4420 00:28:35.227 [2024-12-07 10:03:03.823660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e6de0 is same with the state(6) to be set 00:28:35.227 [2024-12-07 10:03:03.823898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.227 [2024-12-07 10:03:03.823910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2271210 with addr=10.0.0.2, port=4420 00:28:35.227 [2024-12-07 10:03:03.823917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271210 is same with the state(6) to be set 00:28:35.227 [2024-12-07 10:03:03.823926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26b47d0 (9): Bad file descriptor 00:28:35.227 [2024-12-07 10:03:03.823935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6610 (9): Bad file descriptor 00:28:35.227 [2024-12-07 10:03:03.823944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26da520 (9): Bad file descriptor 00:28:35.227 [2024-12-07 10:03:03.823957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:35.227 [2024-12-07 10:03:03.823964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:35.227 [2024-12-07 10:03:03.823973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:35.227 [2024-12-07 10:03:03.823984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:35.227 [2024-12-07 10:03:03.823990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:35.227 [2024-12-07 10:03:03.823997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:35.227 [2024-12-07 10:03:03.824006] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:35.227 [2024-12-07 10:03:03.824013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:35.227 [2024-12-07 10:03:03.824019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:35.227 [2024-12-07 10:03:03.824030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:35.227 [2024-12-07 10:03:03.824036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:35.227 [2024-12-07 10:03:03.824042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:35.227 [2024-12-07 10:03:03.824120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:35.227 [2024-12-07 10:03:03.824128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:35.227 [2024-12-07 10:03:03.824134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:35.227 [2024-12-07 10:03:03.824140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:35.227 [2024-12-07 10:03:03.824318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.227 [2024-12-07 10:03:03.824330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26e6090 with addr=10.0.0.2, port=4420 00:28:35.227 [2024-12-07 10:03:03.824337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e6090 is same with the state(6) to be set 00:28:35.227 [2024-12-07 10:03:03.824346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26e6de0 (9): Bad file descriptor 00:28:35.227 [2024-12-07 10:03:03.824354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2271210 (9): Bad file descriptor 00:28:35.227 [2024-12-07 10:03:03.824362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:35.228 [2024-12-07 10:03:03.824368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:35.228 [2024-12-07 10:03:03.824374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:35.228 [2024-12-07 10:03:03.824386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:35.228 [2024-12-07 10:03:03.824392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:35.228 [2024-12-07 10:03:03.824399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:28:35.228 [2024-12-07 10:03:03.824407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:35.228 [2024-12-07 10:03:03.824413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:35.228 [2024-12-07 10:03:03.824419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:35.228 [2024-12-07 10:03:03.824445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:35.228 [2024-12-07 10:03:03.824452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:35.228 [2024-12-07 10:03:03.824458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:35.228 [2024-12-07 10:03:03.824465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26e6090 (9): Bad file descriptor 00:28:35.228 [2024-12-07 10:03:03.824472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:35.228 [2024-12-07 10:03:03.824478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:35.228 [2024-12-07 10:03:03.824484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:35.228 [2024-12-07 10:03:03.824492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:35.228 [2024-12-07 10:03:03.824498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:35.228 [2024-12-07 10:03:03.824505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:28:35.228 [2024-12-07 10:03:03.824530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:35.228 [2024-12-07 10:03:03.824537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:35.228 [2024-12-07 10:03:03.824543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:35.228 [2024-12-07 10:03:03.824549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:35.228 [2024-12-07 10:03:03.824555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:35.228 [2024-12-07 10:03:03.824577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:35.486 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # nvmfpid= 00:28:35.486 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # sleep 1 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@143 -- # kill -9 1362319 00:28:36.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 143: kill: (1362319) - No such process 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@143 -- # true 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@145 -- # stoptarget 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:36.865 10:03:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:36.865 rmmod nvme_tcp 00:28:36.865 rmmod nvme_fabrics 00:28:36.865 rmmod nvme_keyring 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:36.865 
10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-save 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-restore 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:36.865 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:38.769 00:28:38.769 real 0m7.058s 00:28:38.769 user 0m16.236s 00:28:38.769 sys 0m1.182s 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:38.769 ************************************ 00:28:38.769 END TEST nvmf_shutdown_tc3 00:28:38.769 ************************************ 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@173 -- # [[ e810 == \e\8\1\0 ]] 00:28:38.769 10:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@173 -- # [[ tcp == \r\d\m\a ]] 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@174 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:38.769 ************************************ 00:28:38.769 START TEST nvmf_shutdown_tc4 00:28:38.769 ************************************ 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # starttarget 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 
00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:38.769 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:38.770 10:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:38.770 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:38.770 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.770 10:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:38.770 Found net devices under 0000:86:00.0: cvl_0_0 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:38.770 Found net devices under 0000:86:00.1: cvl_0_1 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # is_hw=yes 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:38.770 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:39.030 10:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:39.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:28:39.030 00:28:39.030 --- 10.0.0.2 ping statistics --- 00:28:39.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.030 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:39.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:28:39.030 00:28:39.030 --- 10.0.0.1 ping statistics --- 00:28:39.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.030 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # return 0 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:39.030 10:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # nvmfpid=1363516 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # waitforlisten 1363516 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 1363516 ']' 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:39.030 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:39.030 [2024-12-07 10:03:07.697357] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:39.030 [2024-12-07 10:03:07.697406] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.289 [2024-12-07 10:03:07.757121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:39.289 [2024-12-07 10:03:07.800300] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.289 [2024-12-07 10:03:07.800340] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.289 [2024-12-07 10:03:07.800348] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.289 [2024-12-07 10:03:07.800354] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.289 [2024-12-07 10:03:07.800360] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
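For reference, the nvmf_tcp_init sequence traced earlier in this log (address flush, namespace creation, interface move, address assignment, firewall rule, ping check) condenses to the sketch below. It assumes the cvl_0_0/cvl_0_1 net devices and the 10.0.0.0/24 addressing seen in this particular run, and must run as root; it is a summary of the traced commands, not a substitute for nvmf/common.sh.

```shell
# Condensed sketch of the nvmf_tcp_init steps traced above (run as root).
# Interface names and addresses are taken from this run's log, not generic.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target-side NIC lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP port
ping -c 1 10.0.0.2                               # verify reachability in both directions
ip netns exec "$NS" ping -c 1 10.0.0.1
```

The namespace keeps target and initiator traffic on real NICs while isolating them on one host, which is why nvmf_tgt is later launched via `ip netns exec cvl_0_0_ns_spdk`.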
00:28:39.289 [2024-12-07 10:03:07.800456] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.289 [2024-12-07 10:03:07.800485] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:39.289 [2024-12-07 10:03:07.800573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.289 [2024-12-07 10:03:07.800574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:39.289 [2024-12-07 10:03:07.946103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.289 10:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:39.289 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:39.289 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:39.289 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.289 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:39.547 Malloc1 00:28:39.547 [2024-12-07 10:03:08.046044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:39.547 Malloc2 00:28:39.547 Malloc3 00:28:39.547 Malloc4 00:28:39.547 Malloc5 00:28:39.547 Malloc6 00:28:39.805 Malloc7 00:28:39.805 Malloc8 00:28:39.805 Malloc9 
00:28:39.805 Malloc10 00:28:39.805 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.805 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:39.805 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:39.805 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:39.806 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@154 -- # perfpid=1363784 00:28:39.806 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@153 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:39.806 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # sleep 5 00:28:39.806 [2024-12-07 10:03:08.527633] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
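The tc4 case traced here starts spdk_nvme_perf against the TCP target and then kills nvmf_tgt while writes are still in flight, which is what produces the "Write completed with error (sct=0, sc=8)" flood below. A minimal sketch of that pattern, assuming `$SPDK_ROOT` points at an SPDK build and `$nvmfpid` was captured when nvmf_tgt started, as in this trace:

```shell
# Sketch of the shutdown-under-load pattern (tc4): requires a built SPDK tree
# and the running nvmf_tgt from the earlier trace; paths are assumptions.
"$SPDK_ROOT/build/bin/spdk_nvme_perf" -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
    -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
perfpid=$!
sleep 5                    # let I/O ramp up (matches shutdown.sh@155)
kill "$nvmfpid"            # terminate the target while writes are in flight
wait "$perfpid" || true    # perf is expected to report failed I/O here
```

The test passes when the target shuts down cleanly despite the outstanding I/O; the initiator-side errors are the expected symptom, not a failure of the case.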
00:28:45.082 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@157 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:45.082 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@160 -- # killprocess 1363516 00:28:45.082 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1363516 ']' 00:28:45.082 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1363516 00:28:45.082 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:28:45.083 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:45.083 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1363516 00:28:45.083 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:45.083 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:45.083 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1363516' 00:28:45.083 killing process with pid 1363516 00:28:45.083 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 1363516 00:28:45.083 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 1363516 00:28:45.083 [2024-12-07 10:03:13.551297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x822fd0 is same with the state(6) to be set 00:28:45.083 [2024-12-07 
10:03:13.551358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x822fd0 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.551716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8234a0 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.551748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8234a0 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.551759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8234a0 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.551768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8234a0 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.551776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8234a0 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.551784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8234a0 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.551793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8234a0 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.552335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x823970 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.552361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x823970 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.552369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x823970 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.552377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x823970 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.552388] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x823970 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.552394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x823970 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.552401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x823970 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.552407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x823970 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.552413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x823970 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.552419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x823970 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.552425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x823970 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.552435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x823970 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.552441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x823970 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.552447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x823970 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.552453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x823970 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.553192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x822b00 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.553220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x822b00 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.553228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x822b00 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.553235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x822b00 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.553242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x822b00 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.553248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x822b00 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.553254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x822b00 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.553260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x822b00 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.553266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x822b00 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.553273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x822b00 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.553279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x822b00 is same with the state(6) to be set 00:28:45.083 [2024-12-07 10:03:13.553286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x822b00 is same with the state(6) to be set 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with 
error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 
00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 [2024-12-07 10:03:13.560145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.083 starting I/O failed: -6 00:28:45.083 starting I/O failed: -6 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 starting I/O failed: -6 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 Write completed with error (sct=0, sc=8) 00:28:45.083 
Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 [2024-12-07 10:03:13.561117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 [2024-12-07 10:03:13.561250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90710 is same with the state(6) to be set 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 [2024-12-07 10:03:13.561274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90710 is same with the state(6) to be set 00:28:45.084 [2024-12-07 10:03:13.561282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90710 is same 
with the state(6) to be set 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 [2024-12-07 10:03:13.561295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90710 is same with the state(6) to be set 00:28:45.084 [2024-12-07 10:03:13.561301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90710 is same with the state(6) to be set 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 [2024-12-07 10:03:13.561309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90710 is same with the state(6) to be set 00:28:45.084 [2024-12-07 10:03:13.561316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90710 is same with the state(6) to be set 00:28:45.084 [2024-12-07 10:03:13.561323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90710 is same with the state(6) to be set 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed
with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 [2024-12-07 10:03:13.561637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90be0 is same with the state(6) to be set 00:28:45.084 starting I/O failed: -6 00:28:45.084 [2024-12-07 10:03:13.561660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90be0 is same with the state(6) to be set 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 [2024-12-07 10:03:13.561673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90be0 is same with the state(6) to be set 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 [2024-12-07 10:03:13.561683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90be0 is same with the state(6) to be set 00:28:45.084 [2024-12-07 10:03:13.561694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90be0 is same with the state(6) to be set 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 [2024-12-07 10:03:13.561704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90be0 is same with the state(6) to be set 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084
starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 [2024-12-07 10:03:13.562177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O 
failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting 
I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.084 Write completed with error (sct=0, sc=8) 00:28:45.084 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 
starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 [2024-12-07 10:03:13.563887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:45.085 NVMe io qpair process completion error 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 
00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 [2024-12-07 10:03:13.565017] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 
00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 [2024-12-07 10:03:13.565621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa92d90 is same with the state(6) to be set 00:28:45.085 starting I/O failed: -6 00:28:45.085 [2024-12-07 10:03:13.565636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa92d90 is same with the state(6) to be set 00:28:45.085 [2024-12-07 10:03:13.565643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa92d90 is same with the state(6) to be set 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 [2024-12-07 10:03:13.565650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa92d90 is same with the state(6) to be set 00:28:45.085 [2024-12-07 10:03:13.565656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa92d90 is same with the state(6) to be set 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 [2024-12-07 10:03:13.565662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa92d90 is same with the state(6) to be set 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 
00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 [2024-12-07 10:03:13.565904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.085 [2024-12-07 10:03:13.565965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa93260 is same with the state(6) to be set 00:28:45.085 [2024-12-07 10:03:13.565985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa93260 is same with the state(6) to be set 00:28:45.085 [2024-12-07 10:03:13.565995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa93260 is same with the state(6) to be set 00:28:45.085 [2024-12-07 10:03:13.566005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa93260 is same with the state(6) to be set 00:28:45.085 [2024-12-07 10:03:13.566016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa93260 is same with the state(6) to be set 00:28:45.085 [2024-12-07 10:03:13.566025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa93260 is same with the state(6) to be set 00:28:45.085 starting I/O failed: -6 00:28:45.085 starting I/O failed: -6 00:28:45.085 starting I/O failed: -6 00:28:45.085 starting I/O failed: -6 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.085 starting I/O failed: -6 00:28:45.085 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O 
failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write 
completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 [2024-12-07 10:03:13.567097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, 
sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error 
(sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with 
error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 [2024-12-07 10:03:13.569167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:45.086 NVMe io qpair process completion error 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 starting I/O failed: -6 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 Write completed with error (sct=0, sc=8) 00:28:45.086 
00:28:45.086 starting I/O failed: -6
00:28:45.086 Write completed with error (sct=0, sc=8)
[00:28:45.086-00:28:45.091: repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries collapsed; distinct messages retained below]
00:28:45.087 [2024-12-07 10:03:13.570395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:45.087 [2024-12-07 10:03:13.571226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:45.087 [2024-12-07 10:03:13.572281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:45.088 [2024-12-07 10:03:13.574398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:45.088 NVMe io qpair process completion error
00:28:45.088 [2024-12-07 10:03:13.575385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:45.089 [2024-12-07 10:03:13.576297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:45.089 [2024-12-07 10:03:13.577304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:45.090 [2024-12-07 10:03:13.579172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:45.090 NVMe io qpair process completion error
00:28:45.090 [2024-12-07 10:03:13.580789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:45.090 [2024-12-07 10:03:13.581601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:45.091 [2024-12-07 10:03:13.582674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:45.091 Write completed with error (sct=0, sc=8)
00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, 
sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 [2024-12-07 10:03:13.585974] 
nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:45.091 NVMe io qpair process completion error 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 starting I/O failed: -6 
00:28:45.091 starting I/O failed: -6 00:28:45.091 starting I/O failed: -6 00:28:45.091 starting I/O failed: -6 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 
00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.091 Write completed with error (sct=0, sc=8) 00:28:45.091 starting I/O failed: -6 00:28:45.092 [2024-12-07 10:03:13.587990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, 
sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O 
failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 [2024-12-07 10:03:13.589085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:45.092 NVMe io qpair process completion error 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write 
completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 [2024-12-07 10:03:13.590035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error 
(sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 00:28:45.092 starting I/O failed: -6 00:28:45.092 Write completed with error (sct=0, sc=8) 
00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 [2024-12-07 10:03:13.590980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 
Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, 
sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 [2024-12-07 10:03:13.592044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write 
completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 
Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 00:28:45.093 Write completed with error (sct=0, sc=8) 00:28:45.093 starting I/O failed: -6 
00:28:45.093 Write completed with error (sct=0, sc=8)
00:28:45.093 starting I/O failed: -6
[identical "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records repeated for every outstanding I/O; verbatim repeats trimmed between the qpair error records below]
00:28:45.094 [2024-12-07 10:03:13.593607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:45.094 NVMe io qpair process completion error
00:28:45.094 [2024-12-07 10:03:13.594639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:45.094 [2024-12-07 10:03:13.595591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:45.095 [2024-12-07 10:03:13.596580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:45.095 [2024-12-07 10:03:13.598691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:45.095 NVMe io qpair process completion error
00:28:45.095 [2024-12-07 10:03:13.599679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:45.096 [2024-12-07 10:03:13.600633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:45.096 [2024-12-07 10:03:13.601641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:45.097 [2024-12-07 10:03:13.610380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:45.097 NVMe io qpair process completion error
00:28:45.097 [2024-12-07 10:03:13.611410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:45.097 [2024-12-07 10:03:13.612354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
with error (sct=0, sc=8) 00:28:45.097 starting I/O failed: -6 00:28:45.097 Write completed with error (sct=0, sc=8) 00:28:45.097 starting I/O failed: -6 00:28:45.097 Write completed with error (sct=0, sc=8) 00:28:45.097 Write completed with error (sct=0, sc=8) 00:28:45.097 starting I/O failed: -6 00:28:45.097 Write completed with error (sct=0, sc=8) 00:28:45.097 starting I/O failed: -6 00:28:45.097 Write completed with error (sct=0, sc=8) 00:28:45.097 starting I/O failed: -6 00:28:45.097 Write completed with error (sct=0, sc=8) 00:28:45.097 Write completed with error (sct=0, sc=8) 00:28:45.097 starting I/O failed: -6 00:28:45.097 Write completed with error (sct=0, sc=8) 00:28:45.097 starting I/O failed: -6 00:28:45.097 Write completed with error (sct=0, sc=8) 00:28:45.097 starting I/O failed: -6 00:28:45.097 Write completed with error (sct=0, sc=8) 00:28:45.097 Write completed with error (sct=0, sc=8) 00:28:45.097 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 
starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 [2024-12-07 10:03:13.613362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 
00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, 
sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error 
(sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 starting I/O failed: -6 00:28:45.098 [2024-12-07 10:03:13.615209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:45.098 NVMe io qpair process completion error 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed 
with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.098 Write completed with error (sct=0, sc=8) 00:28:45.099 Write completed with error (sct=0, sc=8) 00:28:45.099 Write completed with error (sct=0, sc=8) 00:28:45.099 Write completed with error (sct=0, sc=8) 00:28:45.099 Write completed with error (sct=0, sc=8) 00:28:45.099 Write completed with error (sct=0, sc=8) 00:28:45.099 Write completed with error (sct=0, sc=8) 00:28:45.099 Write completed with error (sct=0, sc=8) 00:28:45.099 Write completed with error (sct=0, sc=8) 00:28:45.099 Write completed with error (sct=0, sc=8) 00:28:45.099 Write completed with error (sct=0, sc=8) 00:28:45.099 Initializing NVMe Controllers 00:28:45.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:28:45.099 Controller IO queue size 128, less than required. 
00:28:45.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:45.099 Controller IO queue size 128, less than required.
00:28:45.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:28:45.099 Controller IO queue size 128, less than required.
00:28:45.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:28:45.099 Controller IO queue size 128, less than required.
00:28:45.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:28:45.099 Controller IO queue size 128, less than required.
00:28:45.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:28:45.099 Controller IO queue size 128, less than required.
00:28:45.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:28:45.099 Controller IO queue size 128, less than required.
00:28:45.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:28:45.099 Controller IO queue size 128, less than required.
00:28:45.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:28:45.099 Controller IO queue size 128, less than required.
00:28:45.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:28:45.099 Controller IO queue size 128, less than required.
00:28:45.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:45.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:45.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:45.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:45.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:45.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:45.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:45.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:45.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:45.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:45.099 Initialization complete. Launching workers.
00:28:45.099 ========================================================
00:28:45.099 Latency(us)
00:28:45.099 Device Information : IOPS MiB/s Average min max
00:28:45.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2144.41 92.14 59695.83 683.20 109447.43
00:28:45.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2163.30 92.95 59190.55 769.47 129834.73
00:28:45.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2178.55 93.61 58796.55 775.38 107205.33
00:28:45.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2180.69 93.70 58755.68 751.42 127777.24
00:28:45.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2175.97 93.50 58905.64 808.27 105830.81
00:28:45.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2160.73 92.84 59508.80 704.37 101204.93
00:28:45.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2135.18 91.75 60054.82 851.19 109787.18
00:28:45.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2112.21 90.76 60720.57 685.93 111915.63
00:28:45.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2148.92 92.34 59702.01 649.92 114612.38
00:28:45.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2108.35 90.59 60953.97 691.66 124707.41
00:28:45.099 ========================================================
00:28:45.099 Total : 21508.32 924.19 59620.09 649.92 129834.73
00:28:45.099
00:28:45.099 [2024-12-07 10:03:13.620864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f484a0 is same with the state(6) to be set
00:28:45.099 [2024-12-07 10:03:13.620913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f47e40 is same with the state(6) to be set
00:28:45.099 [2024-12-07 10:03:13.620942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fce080 is same with the state(6) to be set
00:28:45.099 [2024-12-07 10:03:13.620979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbf370 is same with the state(6) to be set
00:28:45.099 [2024-12-07 10:03:13.621007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f487d0 is same with the state(6) to be set
00:28:45.099 [2024-12-07 10:03:13.621036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f48170 is same with the state(6) to be set
00:28:45.099 [2024-12-07 10:03:13.621064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9180 is same with the state(6) to be set
00:28:45.099 [2024-12-07 10:03:13.621094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc4280 is same with the state(6) to be set
00:28:45.099 [2024-12-07 10:03:13.621122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7e80 is same with the state(6) to be set
00:28:45.099 [2024-12-07 10:03:13.621153] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2f80 is same with the state(6) to be set
00:28:45.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:45.359 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@161 -- # nvmfpid=
00:28:45.359 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@164 -- # sleep 1
00:28:46.296 10:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@165 -- # wait 1363784
00:28:46.296 10:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@165 -- # true
00:28:46.296 10:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@166 -- # stoptarget
00:28:46.296 10:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:28:46.296 10:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:46.296 10:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:46.296 10:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:28:46.296 10:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # nvmfcleanup
00:28:46.296 10:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:28:46.296 10:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:46.296 10:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:28:46.296 10:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:46.296 10:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:46.296 rmmod nvme_tcp
00:28:46.296 rmmod nvme_fabrics
00:28:46.296 rmmod nvme_keyring
00:28:46.555 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:46.555 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:28:46.555 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:28:46.555 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@513 -- # '[' -n '' ']'
00:28:46.555 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:28:46.555 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:28:46.555 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:28:46.555 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:28:46.555 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-save
00:28:46.555 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:28:46.555 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-restore
00:28:46.555 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:46.555 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:46.555 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:46.555 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:46.555 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:48.459 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:48.459
00:28:48.459 real 0m9.758s
00:28:48.459 user 0m24.845s
00:28:48.459 sys 0m5.194s
00:28:48.459 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:48.459 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:48.459 ************************************
00:28:48.459 END TEST nvmf_shutdown_tc4
00:28:48.459 ************************************
00:28:48.459 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@177 -- # trap - SIGINT SIGTERM EXIT
00:28:48.459
00:28:48.459 real 0m39.676s
00:28:48.459 user 1m37.191s
00:28:48.459 sys 0m13.618s
00:28:48.459 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:48.459 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:28:48.459 ************************************
00:28:48.459 END TEST nvmf_shutdown
00:28:48.459 ************************************
00:28:48.718 10:03:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:28:48.718
00:28:48.718 real 18m6.272s
00:28:48.718 user 48m48.166s
00:28:48.718 sys 4m23.755s
00:28:48.718 10:03:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:48.718 10:03:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:28:48.718 ************************************
00:28:48.718 END TEST nvmf_target_extra
00:28:48.718 ************************************
00:28:48.718 10:03:17 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:28:48.718 10:03:17 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:28:48.718 10:03:17 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:48.718 10:03:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:48.718 ************************************
00:28:48.718 START TEST nvmf_host
00:28:48.718 ************************************
00:28:48.718 10:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:28:48.718 * Looking for test storage...
00:28:48.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:28:48.718 10:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:28:48.718 10:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version
00:28:48.718 10:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-:
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-:
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<'
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:28:48.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:48.977 --rc genhtml_branch_coverage=1
00:28:48.977 --rc genhtml_function_coverage=1
00:28:48.977 --rc genhtml_legend=1
00:28:48.977 --rc geninfo_all_blocks=1
00:28:48.977 --rc geninfo_unexecuted_blocks=1
00:28:48.977
00:28:48.977 '
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:28:48.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:48.977 --rc genhtml_branch_coverage=1
00:28:48.977 --rc genhtml_function_coverage=1
00:28:48.977 --rc genhtml_legend=1
00:28:48.977 --rc geninfo_all_blocks=1
00:28:48.977 --rc geninfo_unexecuted_blocks=1
00:28:48.977
00:28:48.977 '
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:28:48.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:48.977 --rc genhtml_branch_coverage=1
00:28:48.977 --rc genhtml_function_coverage=1
00:28:48.977 --rc genhtml_legend=1
00:28:48.977 --rc geninfo_all_blocks=1
00:28:48.977 --rc geninfo_unexecuted_blocks=1
00:28:48.977
00:28:48.977 '
00:28:48.977 10:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:28:48.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:48.977 --rc genhtml_branch_coverage=1
00:28:48.977 --rc genhtml_function_coverage=1
00:28:48.977 --rc genhtml_legend=1
00:28:48.978 --rc geninfo_all_blocks=1
00:28:48.978 --rc geninfo_unexecuted_blocks=1
00:28:48.978
00:28:48.978 '
00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@17
-- # nvme gen-hostnqn 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:48.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.978 ************************************ 00:28:48.978 START TEST nvmf_multicontroller 00:28:48.978 ************************************ 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:48.978 * Looking for test storage... 
00:28:48.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:48.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.978 --rc genhtml_branch_coverage=1 00:28:48.978 --rc genhtml_function_coverage=1 
00:28:48.978 --rc genhtml_legend=1 00:28:48.978 --rc geninfo_all_blocks=1 00:28:48.978 --rc geninfo_unexecuted_blocks=1 00:28:48.978 00:28:48.978 ' 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:48.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.978 --rc genhtml_branch_coverage=1 00:28:48.978 --rc genhtml_function_coverage=1 00:28:48.978 --rc genhtml_legend=1 00:28:48.978 --rc geninfo_all_blocks=1 00:28:48.978 --rc geninfo_unexecuted_blocks=1 00:28:48.978 00:28:48.978 ' 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:48.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.978 --rc genhtml_branch_coverage=1 00:28:48.978 --rc genhtml_function_coverage=1 00:28:48.978 --rc genhtml_legend=1 00:28:48.978 --rc geninfo_all_blocks=1 00:28:48.978 --rc geninfo_unexecuted_blocks=1 00:28:48.978 00:28:48.978 ' 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:48.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.978 --rc genhtml_branch_coverage=1 00:28:48.978 --rc genhtml_function_coverage=1 00:28:48.978 --rc genhtml_legend=1 00:28:48.978 --rc geninfo_all_blocks=1 00:28:48.978 --rc geninfo_unexecuted_blocks=1 00:28:48.978 00:28:48.978 ' 00:28:48.978 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.979 10:03:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:48.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@436 -- # remove_spdk_ns 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:28:48.979 10:03:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
pci_devs+=("${e810[@]}") 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:54.252 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:54.253 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:54.253 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:54.253 10:03:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:54.253 Found net devices under 0000:86:00.0: cvl_0_0 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:54.253 Found net devices under 0000:86:00.1: cvl_0_1 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # is_hw=yes 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:54.253 10:03:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:54.253 10:03:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:54.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:54.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:28:54.253 00:28:54.253 --- 10.0.0.2 ping statistics --- 00:28:54.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.253 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:54.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:54.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:28:54.253 00:28:54.253 --- 10.0.0.1 ping statistics --- 00:28:54.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.253 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # return 0 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:54.253 10:03:22 
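The two pings above are the gate for the rest of the test: both directions across the namespace must show zero loss before the target is started. A script consuming this log (or rerunning the check) can assert that programmatically; the sample summary line below is copied verbatim from the trace:

```shell
# Extract the packet-loss percentage from a ping summary line and assert
# the link is healthy. The summary line format is the one in the trace above.
summary='1 packets transmitted, 1 received, 0% packet loss, time 0ms'
loss=$(printf '%s\n' "$summary" | awk -F', ' '{ sub(/%.*/, "", $3); print $3 }')
[ "$loss" = "0" ] && echo "namespace link OK"
```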
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # nvmfpid=1368593 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # waitforlisten 1368593 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1368593 ']' 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.253 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:54.254 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:54.254 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:54.254 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:54.254 10:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:54.514 [2024-12-07 10:03:23.015556] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:54.514 [2024-12-07 10:03:23.015600] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.514 [2024-12-07 10:03:23.073943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:54.514 [2024-12-07 10:03:23.114985] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:54.514 [2024-12-07 10:03:23.115024] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:54.514 [2024-12-07 10:03:23.115031] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.514 [2024-12-07 10:03:23.115038] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.514 [2024-12-07 10:03:23.115043] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:54.514 [2024-12-07 10:03:23.115145] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:54.514 [2024-12-07 10:03:23.115172] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:54.514 [2024-12-07 10:03:23.115174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.514 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:54.514 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:28:54.514 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:54.514 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:54.514 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:54.775 [2024-12-07 10:03:23.245577] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
00:28:54.775 Malloc0 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:54.775 [2024-12-07 10:03:23.303731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:54.775 
10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:54.775 [2024-12-07 10:03:23.311674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:54.775 Malloc1 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.775 10:03:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.775 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:54.776 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.776 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1368641 00:28:54.776 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:54.776 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:54.776 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1368641 /var/tmp/bdevperf.sock 00:28:54.776 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1368641 ']' 00:28:54.776 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
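The `rpc_cmd` calls traced above provision both subsystems the same way: create the TCP transport once, then for each controller create a malloc bdev, a subsystem, a namespace, and two listeners (ports 4420 and 4421 on the same address, which is what later enables the failover/multipath cases). Against a live target the equivalent direct sequence for cnode1 would be roughly the following; the `rpc.py` path is the workspace layout from this log and is otherwise an assumption, and these commands only succeed with a started `nvmf_tgt`:

```shell
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
```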
00:28:54.776 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:54.776 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:54.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:54.776 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:54.776 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.036 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:55.036 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:28:55.036 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:28:55.036 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.036 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.036 NVMe0n1 00:28:55.036 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.036 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:55.036 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:55.036 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.036 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.036 10:03:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.036 1 00:28:55.036 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:55.036 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:55.036 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:55.036 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:55.036 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:55.036 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:55.036 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:55.037 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:55.037 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.037 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.037 request: 00:28:55.037 { 00:28:55.037 "name": "NVMe0", 00:28:55.037 "trtype": "tcp", 00:28:55.037 "traddr": "10.0.0.2", 00:28:55.037 "adrfam": "ipv4", 00:28:55.037 "trsvcid": "4420", 00:28:55.037 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:28:55.037 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:55.037 "hostaddr": "10.0.0.1", 00:28:55.037 "prchk_reftag": false, 00:28:55.037 "prchk_guard": false, 00:28:55.037 "hdgst": false, 00:28:55.037 "ddgst": false, 00:28:55.037 "allow_unrecognized_csi": false, 00:28:55.037 "method": "bdev_nvme_attach_controller", 00:28:55.037 "req_id": 1 00:28:55.037 } 00:28:55.037 Got JSON-RPC error response 00:28:55.037 response: 00:28:55.037 { 00:28:55.037 "code": -114, 00:28:55.037 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:55.037 } 00:28:55.037 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:55.037 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:55.037 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:55.037 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:55.037 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:55.037 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:55.037 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:55.037 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:55.037 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:55.037 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:55.037 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.297 request: 00:28:55.297 { 00:28:55.297 "name": "NVMe0", 00:28:55.297 "trtype": "tcp", 00:28:55.297 "traddr": "10.0.0.2", 00:28:55.297 "adrfam": "ipv4", 00:28:55.297 "trsvcid": "4420", 00:28:55.297 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:55.297 "hostaddr": "10.0.0.1", 00:28:55.297 "prchk_reftag": false, 00:28:55.297 "prchk_guard": false, 00:28:55.297 "hdgst": false, 00:28:55.297 "ddgst": false, 00:28:55.297 "allow_unrecognized_csi": false, 00:28:55.297 "method": "bdev_nvme_attach_controller", 00:28:55.297 "req_id": 1 00:28:55.297 } 00:28:55.297 Got JSON-RPC error response 00:28:55.297 response: 00:28:55.297 { 00:28:55.297 "code": -114, 00:28:55.297 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:55.297 } 00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.297 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.297 request: 00:28:55.297 { 00:28:55.297 "name": "NVMe0", 00:28:55.297 "trtype": "tcp", 00:28:55.297 "traddr": "10.0.0.2", 00:28:55.297 "adrfam": "ipv4", 00:28:55.297 "trsvcid": "4420", 00:28:55.297 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.297 
"hostaddr": "10.0.0.1", 00:28:55.297 "prchk_reftag": false, 00:28:55.297 "prchk_guard": false, 00:28:55.297 "hdgst": false, 00:28:55.297 "ddgst": false, 00:28:55.298 "multipath": "disable", 00:28:55.298 "allow_unrecognized_csi": false, 00:28:55.298 "method": "bdev_nvme_attach_controller", 00:28:55.298 "req_id": 1 00:28:55.298 } 00:28:55.298 Got JSON-RPC error response 00:28:55.298 response: 00:28:55.298 { 00:28:55.298 "code": -114, 00:28:55.298 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:28:55.298 } 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.298 request: 00:28:55.298 { 00:28:55.298 "name": "NVMe0", 00:28:55.298 "trtype": "tcp", 00:28:55.298 "traddr": "10.0.0.2", 00:28:55.298 "adrfam": "ipv4", 00:28:55.298 "trsvcid": "4420", 00:28:55.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.298 "hostaddr": "10.0.0.1", 00:28:55.298 "prchk_reftag": false, 00:28:55.298 "prchk_guard": false, 00:28:55.298 "hdgst": false, 00:28:55.298 "ddgst": false, 00:28:55.298 "multipath": "failover", 00:28:55.298 "allow_unrecognized_csi": false, 00:28:55.298 "method": "bdev_nvme_attach_controller", 00:28:55.298 "req_id": 1 00:28:55.298 } 00:28:55.298 Got JSON-RPC error response 00:28:55.298 response: 00:28:55.298 { 00:28:55.298 "code": -114, 00:28:55.298 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:55.298 } 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:55.298 
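All three rejected `bdev_nvme_attach_controller` attempts above (different hostnqn, different subnqn, and `-x disable`/`-x failover` against the already-attached 4420 path) come back with the same JSON-RPC error code, `-114`, differing only in the message. A caller scripting these RPCs can branch on the code; the response fragment below is copied from the trace, and the extraction is a plain-shell sketch that avoids assuming `jq` is installed on the test host:

```shell
# Pull the numeric error code out of a JSON-RPC error response and react to
# the "controller already exists" case seen three times in the trace above.
resp='{ "code": -114, "message": "A controller named NVMe0 already exists with the specified network path" }'
code=$(printf '%s\n' "$resp" | sed -n 's/.*"code": *\(-*[0-9][0-9]*\).*/\1/p')
if [ "$code" = "-114" ]; then
  echo "attach rejected: controller name/path already in use"
fi
```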
10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.298 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.298 10:03:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.557 00:28:55.557 10:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.557 10:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:28:55.557 10:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:55.557 10:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.557 10:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.557 10:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.557 10:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:55.557 10:03:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:56.934 { 00:28:56.934 "results": [ 00:28:56.934 { 00:28:56.934 "job": "NVMe0n1", 00:28:56.934 "core_mask": "0x1", 00:28:56.934 "workload": "write", 00:28:56.934 "status": "finished", 00:28:56.934 "queue_depth": 128, 00:28:56.934 "io_size": 4096, 00:28:56.934 "runtime": 1.006752, 00:28:56.934 "iops": 22755.35583738597, 00:28:56.934 "mibps": 88.88810873978895, 00:28:56.934 "io_failed": 0, 00:28:56.934 "io_timeout": 0, 00:28:56.934 "avg_latency_us": 5607.379200352244, 00:28:56.934 "min_latency_us": 4331.074782608695, 00:28:56.934 "max_latency_us": 11568.528695652174 00:28:56.934 } 00:28:56.934 ], 00:28:56.934 "core_count": 1 00:28:56.934 } 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
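The bdevperf result block above reports both `iops` and `mibps`; the two are redundant, since throughput in MiB/s is just IOPS times the 4096-byte I/O size divided by 2^20. A quick cross-check of the reported figures (values copied from the JSON result in this log):

```shell
# Verify that the reported 88.89 MiB/s is consistent with the reported IOPS
# at io_size 4096, i.e. mibps == iops * io_size / 2^20.
iops=22755.35583738597
io_size=4096
awk -v i="$iops" -v s="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", i * s / (1024 * 1024) }'
```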
host/multicontroller.sh@100 -- # [[ -n '' ]] 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1368641 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1368641 ']' 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1368641 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1368641 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1368641' 00:28:56.934 killing process with pid 1368641 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1368641 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1368641 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:28:56.934 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:56.934 [2024-12-07 10:03:23.413922] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:28:56.934 [2024-12-07 10:03:23.413981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1368641 ] 00:28:56.934 [2024-12-07 10:03:23.470524] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.934 [2024-12-07 10:03:23.512886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.934 [2024-12-07 10:03:24.128715] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 3a97df77-4081-45b3-924e-5f30ca886f7c already exists 00:28:56.934 [2024-12-07 10:03:24.128744] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:3a97df77-4081-45b3-924e-5f30ca886f7c alias for bdev NVMe1n1 00:28:56.934 [2024-12-07 10:03:24.128752] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:56.934 Running I/O for 1 seconds... 00:28:56.934 22749.00 IOPS, 88.86 MiB/s 00:28:56.934 Latency(us) 00:28:56.934 [2024-12-07T09:03:25.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.934 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:56.934 NVMe0n1 : 1.01 22755.36 88.89 0.00 0.00 5607.38 4331.07 11568.53 00:28:56.934 [2024-12-07T09:03:25.660Z] =================================================================================================================== 00:28:56.934 [2024-12-07T09:03:25.660Z] Total : 22755.36 88.89 0.00 0.00 5607.38 4331.07 11568.53 00:28:56.934 Received shutdown signal, test time was about 1.000000 seconds 00:28:56.934 00:28:56.934 Latency(us) 00:28:56.934 [2024-12-07T09:03:25.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.934 [2024-12-07T09:03:25.660Z] =================================================================================================================== 00:28:56.934 [2024-12-07T09:03:25.660Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:28:56.934 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:28:56.934 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:56.935 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:28:56.935 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:56.935 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:28:56.935 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:56.935 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:56.935 rmmod nvme_tcp 00:28:56.935 rmmod nvme_fabrics 00:28:56.935 rmmod nvme_keyring 00:28:56.935 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:56.935 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:28:56.935 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:28:56.935 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@513 -- # '[' -n 1368593 ']' 00:28:56.935 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # killprocess 1368593 00:28:56.935 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1368593 ']' 00:28:56.935 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1368593 
00:28:56.935 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:28:56.935 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:56.935 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1368593 00:28:57.194 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:57.194 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:57.194 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1368593' 00:28:57.194 killing process with pid 1368593 00:28:57.194 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1368593 00:28:57.194 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1368593 00:28:57.194 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:57.194 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:57.194 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:57.194 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:28:57.194 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-save 00:28:57.194 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:57.194 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-restore 00:28:57.194 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:57.194 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:28:57.194 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.194 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.194 10:03:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.727 10:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:59.727 00:28:59.727 real 0m10.462s 00:28:59.727 user 0m11.938s 00:28:59.727 sys 0m4.723s 00:28:59.727 10:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:59.727 10:03:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:59.727 ************************************ 00:28:59.727 END TEST nvmf_multicontroller 00:28:59.727 ************************************ 00:28:59.727 10:03:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:59.727 10:03:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:59.727 10:03:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:59.727 10:03:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.727 ************************************ 00:28:59.727 START TEST nvmf_aer 00:28:59.727 ************************************ 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:59.727 * Looking for test storage... 
00:28:59.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:28:59.727 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:59.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.728 --rc genhtml_branch_coverage=1 00:28:59.728 --rc genhtml_function_coverage=1 00:28:59.728 --rc genhtml_legend=1 00:28:59.728 --rc geninfo_all_blocks=1 00:28:59.728 --rc geninfo_unexecuted_blocks=1 00:28:59.728 00:28:59.728 ' 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:59.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.728 --rc 
genhtml_branch_coverage=1 00:28:59.728 --rc genhtml_function_coverage=1 00:28:59.728 --rc genhtml_legend=1 00:28:59.728 --rc geninfo_all_blocks=1 00:28:59.728 --rc geninfo_unexecuted_blocks=1 00:28:59.728 00:28:59.728 ' 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:59.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.728 --rc genhtml_branch_coverage=1 00:28:59.728 --rc genhtml_function_coverage=1 00:28:59.728 --rc genhtml_legend=1 00:28:59.728 --rc geninfo_all_blocks=1 00:28:59.728 --rc geninfo_unexecuted_blocks=1 00:28:59.728 00:28:59.728 ' 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:59.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.728 --rc genhtml_branch_coverage=1 00:28:59.728 --rc genhtml_function_coverage=1 00:28:59.728 --rc genhtml_legend=1 00:28:59.728 --rc geninfo_all_blocks=1 00:28:59.728 --rc geninfo_unexecuted_blocks=1 00:28:59.728 00:28:59.728 ' 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.728 10:03:28 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:59.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:28:59.728 10:03:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:05.003 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # [[ tcp == rdma 
]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:05.004 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:05.004 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:05.004 Found net devices under 0000:86:00.0: cvl_0_0 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:05.004 Found net devices under 0000:86:00.1: cvl_0_1 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # is_hw=yes 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:05.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:05.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:29:05.004 00:29:05.004 --- 10.0.0.2 ping statistics --- 00:29:05.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.004 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:05.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:05.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:29:05.004 00:29:05.004 --- 10.0.0.1 ping statistics --- 00:29:05.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.004 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # return 0 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=1372494 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 1372494 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1372494 ']' 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:05.004 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:05.004 [2024-12-07 10:03:33.525735] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:29:05.004 [2024-12-07 10:03:33.525782] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.004 [2024-12-07 10:03:33.583859] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:05.004 [2024-12-07 10:03:33.626774] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:05.004 [2024-12-07 10:03:33.626814] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.004 [2024-12-07 10:03:33.626821] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:05.004 [2024-12-07 10:03:33.626827] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:05.004 [2024-12-07 10:03:33.626832] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:05.004 [2024-12-07 10:03:33.626875] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.004 [2024-12-07 10:03:33.626980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:05.005 [2024-12-07 10:03:33.627067] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:05.005 [2024-12-07 10:03:33.627069] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:05.264 [2024-12-07 10:03:33.776212] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:05.264 Malloc0 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:05.264 [2024-12-07 10:03:33.822459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.264 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:05.264 [ 00:29:05.264 { 00:29:05.264 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:05.264 "subtype": "Discovery", 00:29:05.264 "listen_addresses": [], 00:29:05.264 "allow_any_host": true, 00:29:05.264 "hosts": [] 00:29:05.264 }, 00:29:05.264 { 00:29:05.264 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:05.264 "subtype": "NVMe", 00:29:05.264 "listen_addresses": [ 00:29:05.264 { 00:29:05.264 "trtype": "TCP", 00:29:05.264 "adrfam": "IPv4", 00:29:05.264 "traddr": "10.0.0.2", 00:29:05.264 "trsvcid": "4420" 00:29:05.264 } 00:29:05.264 ], 00:29:05.264 "allow_any_host": true, 00:29:05.264 "hosts": [], 00:29:05.264 "serial_number": "SPDK00000000000001", 00:29:05.264 "model_number": "SPDK bdev Controller", 00:29:05.264 "max_namespaces": 2, 00:29:05.264 "min_cntlid": 1, 00:29:05.264 "max_cntlid": 65519, 00:29:05.264 "namespaces": [ 00:29:05.264 { 00:29:05.264 "nsid": 1, 00:29:05.264 "bdev_name": "Malloc0", 00:29:05.265 "name": "Malloc0", 00:29:05.265 "nguid": "AABFEC140C6C4CC6A90D772F1BAA1BD6", 00:29:05.265 "uuid": "aabfec14-0c6c-4cc6-a90d-772f1baa1bd6" 00:29:05.265 } 00:29:05.265 ] 00:29:05.265 } 00:29:05.265 ] 00:29:05.265 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.265 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:05.265 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:05.265 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1372628 00:29:05.265 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:29:05.265 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:29:05.265 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:05.265 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:29:05.265 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:29:05.265 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:05.265 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:05.265 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:05.265 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:29:05.265 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:29:05.265 10:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:05.524 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:05.524 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:29:05.524 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:29:05.524 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:05.524 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:05.524 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:05.524 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:29:05.524 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:05.524 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.524 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:05.524 Malloc1 00:29:05.524 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.524 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:05.524 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.524 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:05.524 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.524 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:05.524 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.524 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:05.524 Asynchronous Event Request test 00:29:05.524 Attaching to 10.0.0.2 00:29:05.524 Attached to 10.0.0.2 00:29:05.524 Registering asynchronous event callbacks... 00:29:05.524 Starting namespace attribute notice tests for all controllers... 00:29:05.524 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:05.524 aer_cb - Changed Namespace 00:29:05.524 Cleaning up... 
00:29:05.524 [ 00:29:05.524 { 00:29:05.524 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:05.524 "subtype": "Discovery", 00:29:05.524 "listen_addresses": [], 00:29:05.524 "allow_any_host": true, 00:29:05.524 "hosts": [] 00:29:05.524 }, 00:29:05.524 { 00:29:05.524 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:05.524 "subtype": "NVMe", 00:29:05.524 "listen_addresses": [ 00:29:05.524 { 00:29:05.524 "trtype": "TCP", 00:29:05.524 "adrfam": "IPv4", 00:29:05.524 "traddr": "10.0.0.2", 00:29:05.524 "trsvcid": "4420" 00:29:05.524 } 00:29:05.524 ], 00:29:05.524 "allow_any_host": true, 00:29:05.524 "hosts": [], 00:29:05.524 "serial_number": "SPDK00000000000001", 00:29:05.524 "model_number": "SPDK bdev Controller", 00:29:05.524 "max_namespaces": 2, 00:29:05.524 "min_cntlid": 1, 00:29:05.524 "max_cntlid": 65519, 00:29:05.524 "namespaces": [ 00:29:05.524 { 00:29:05.524 "nsid": 1, 00:29:05.524 "bdev_name": "Malloc0", 00:29:05.524 "name": "Malloc0", 00:29:05.524 "nguid": "AABFEC140C6C4CC6A90D772F1BAA1BD6", 00:29:05.525 "uuid": "aabfec14-0c6c-4cc6-a90d-772f1baa1bd6" 00:29:05.525 }, 00:29:05.525 { 00:29:05.525 "nsid": 2, 00:29:05.525 "bdev_name": "Malloc1", 00:29:05.525 "name": "Malloc1", 00:29:05.525 "nguid": "B314395280A0443CAF34C24308E89DA4", 00:29:05.525 "uuid": "b3143952-80a0-443c-af34-c24308e89da4" 00:29:05.525 } 00:29:05.525 ] 00:29:05.525 } 00:29:05.525 ] 00:29:05.525 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.525 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1372628 00:29:05.525 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:05.525 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.525 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:05.525 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.525 10:03:34 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:05.525 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.525 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:05.784 rmmod nvme_tcp 00:29:05.784 rmmod nvme_fabrics 00:29:05.784 rmmod nvme_keyring 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 
1372494 ']' 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 1372494 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1372494 ']' 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1372494 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1372494 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1372494' 00:29:05.784 killing process with pid 1372494 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1372494 00:29:05.784 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1372494 00:29:06.044 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:06.044 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:06.044 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:06.044 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:06.044 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-save 00:29:06.044 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-restore 00:29:06.044 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:06.044 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:06.044 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:06.044 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.044 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:06.044 10:03:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.948 10:03:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.948 00:29:07.948 real 0m8.611s 00:29:07.948 user 0m5.131s 00:29:07.948 sys 0m4.378s 00:29:07.948 10:03:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:07.948 10:03:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:07.948 ************************************ 00:29:07.948 END TEST nvmf_aer 00:29:07.948 ************************************ 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.208 ************************************ 00:29:08.208 START TEST nvmf_async_init 00:29:08.208 ************************************ 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:08.208 * Looking for test storage... 
00:29:08.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:08.208 10:03:36 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:08.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.208 --rc genhtml_branch_coverage=1 00:29:08.208 --rc genhtml_function_coverage=1 00:29:08.208 --rc genhtml_legend=1 00:29:08.208 --rc geninfo_all_blocks=1 00:29:08.208 --rc geninfo_unexecuted_blocks=1 00:29:08.208 
00:29:08.208 ' 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:08.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.208 --rc genhtml_branch_coverage=1 00:29:08.208 --rc genhtml_function_coverage=1 00:29:08.208 --rc genhtml_legend=1 00:29:08.208 --rc geninfo_all_blocks=1 00:29:08.208 --rc geninfo_unexecuted_blocks=1 00:29:08.208 00:29:08.208 ' 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:08.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.208 --rc genhtml_branch_coverage=1 00:29:08.208 --rc genhtml_function_coverage=1 00:29:08.208 --rc genhtml_legend=1 00:29:08.208 --rc geninfo_all_blocks=1 00:29:08.208 --rc geninfo_unexecuted_blocks=1 00:29:08.208 00:29:08.208 ' 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:08.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.208 --rc genhtml_branch_coverage=1 00:29:08.208 --rc genhtml_function_coverage=1 00:29:08.208 --rc genhtml_legend=1 00:29:08.208 --rc geninfo_all_blocks=1 00:29:08.208 --rc geninfo_unexecuted_blocks=1 00:29:08.208 00:29:08.208 ' 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:08.208 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:08.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=c9dca51c36ab4ae0bfff39007f578a0f 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:08.209 10:03:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.479 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:13.480 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:13.480 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # (( 0 > 0 )) 
00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:13.480 Found net devices under 0000:86:00.0: cvl_0_0 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:13.480 10:03:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:13.480 Found net devices under 0000:86:00.1: cvl_0_1 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # is_hw=yes 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.480 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.739 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.739 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.739 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:13.739 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.739 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:13.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:13.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:29:13.740 00:29:13.740 --- 10.0.0.2 ping statistics --- 00:29:13.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.740 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:13.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:29:13.740 00:29:13.740 --- 10.0.0.1 ping statistics --- 00:29:13.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.740 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # return 0 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=1376151 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 1376151 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1376151 ']' 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:13.740 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:13.999 [2024-12-07 10:03:42.466721] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:29:13.999 [2024-12-07 10:03:42.466767] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.999 [2024-12-07 10:03:42.525184] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.999 [2024-12-07 10:03:42.565927] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.999 [2024-12-07 10:03:42.565970] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.999 [2024-12-07 10:03:42.565977] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.999 [2024-12-07 10:03:42.565983] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.999 [2024-12-07 10:03:42.565988] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:13.999 [2024-12-07 10:03:42.566022] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.999 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:13.999 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:29:13.999 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:13.999 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:13.999 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:13.999 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.999 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:13.999 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.999 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.000 [2024-12-07 10:03:42.692064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.000 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.000 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:14.000 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.000 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.000 null0 00:29:14.000 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.000 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:14.000 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.000 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.000 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.000 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:14.000 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.000 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.000 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.000 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g c9dca51c36ab4ae0bfff39007f578a0f 00:29:14.000 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.000 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.259 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.259 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:14.259 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.259 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.259 [2024-12-07 10:03:42.732288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.259 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.259 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:14.259 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.259 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.259 nvme0n1 00:29:14.259 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.259 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:14.259 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.259 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.259 [ 00:29:14.259 { 00:29:14.259 "name": "nvme0n1", 00:29:14.259 "aliases": [ 00:29:14.259 "c9dca51c-36ab-4ae0-bfff-39007f578a0f" 00:29:14.259 ], 00:29:14.259 "product_name": "NVMe disk", 00:29:14.259 "block_size": 512, 00:29:14.259 "num_blocks": 2097152, 00:29:14.259 "uuid": "c9dca51c-36ab-4ae0-bfff-39007f578a0f", 00:29:14.259 "numa_id": 1, 00:29:14.259 "assigned_rate_limits": { 00:29:14.259 "rw_ios_per_sec": 0, 00:29:14.259 "rw_mbytes_per_sec": 0, 00:29:14.259 "r_mbytes_per_sec": 0, 00:29:14.259 "w_mbytes_per_sec": 0 00:29:14.259 }, 00:29:14.259 "claimed": false, 00:29:14.259 "zoned": false, 00:29:14.259 "supported_io_types": { 00:29:14.259 "read": true, 00:29:14.259 "write": true, 00:29:14.259 "unmap": false, 00:29:14.259 "flush": true, 00:29:14.259 "reset": true, 00:29:14.259 "nvme_admin": true, 00:29:14.259 "nvme_io": true, 00:29:14.259 "nvme_io_md": false, 00:29:14.259 "write_zeroes": true, 00:29:14.259 "zcopy": false, 00:29:14.259 "get_zone_info": false, 00:29:14.259 "zone_management": false, 00:29:14.259 "zone_append": false, 00:29:14.259 "compare": true, 00:29:14.259 "compare_and_write": true, 00:29:14.259 "abort": true, 00:29:14.259 "seek_hole": false, 00:29:14.259 "seek_data": false, 00:29:14.259 "copy": true, 00:29:14.259 
"nvme_iov_md": false 00:29:14.259 }, 00:29:14.259 "memory_domains": [ 00:29:14.259 { 00:29:14.259 "dma_device_id": "system", 00:29:14.259 "dma_device_type": 1 00:29:14.259 } 00:29:14.259 ], 00:29:14.259 "driver_specific": { 00:29:14.259 "nvme": [ 00:29:14.259 { 00:29:14.259 "trid": { 00:29:14.259 "trtype": "TCP", 00:29:14.259 "adrfam": "IPv4", 00:29:14.259 "traddr": "10.0.0.2", 00:29:14.259 "trsvcid": "4420", 00:29:14.259 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:14.259 }, 00:29:14.259 "ctrlr_data": { 00:29:14.259 "cntlid": 1, 00:29:14.259 "vendor_id": "0x8086", 00:29:14.259 "model_number": "SPDK bdev Controller", 00:29:14.259 "serial_number": "00000000000000000000", 00:29:14.259 "firmware_revision": "24.09.1", 00:29:14.259 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:14.259 "oacs": { 00:29:14.259 "security": 0, 00:29:14.259 "format": 0, 00:29:14.259 "firmware": 0, 00:29:14.259 "ns_manage": 0 00:29:14.259 }, 00:29:14.518 "multi_ctrlr": true, 00:29:14.519 "ana_reporting": false 00:29:14.519 }, 00:29:14.519 "vs": { 00:29:14.519 "nvme_version": "1.3" 00:29:14.519 }, 00:29:14.519 "ns_data": { 00:29:14.519 "id": 1, 00:29:14.519 "can_share": true 00:29:14.519 } 00:29:14.519 } 00:29:14.519 ], 00:29:14.519 "mp_policy": "active_passive" 00:29:14.519 } 00:29:14.519 } 00:29:14.519 ] 00:29:14.519 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.519 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:14.519 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.519 10:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.519 [2024-12-07 10:03:42.988805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:14.519 [2024-12-07 10:03:42.988875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x146c9d0 (9): Bad file descriptor 00:29:14.519 [2024-12-07 10:03:43.121030] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.519 [ 00:29:14.519 { 00:29:14.519 "name": "nvme0n1", 00:29:14.519 "aliases": [ 00:29:14.519 "c9dca51c-36ab-4ae0-bfff-39007f578a0f" 00:29:14.519 ], 00:29:14.519 "product_name": "NVMe disk", 00:29:14.519 "block_size": 512, 00:29:14.519 "num_blocks": 2097152, 00:29:14.519 "uuid": "c9dca51c-36ab-4ae0-bfff-39007f578a0f", 00:29:14.519 "numa_id": 1, 00:29:14.519 "assigned_rate_limits": { 00:29:14.519 "rw_ios_per_sec": 0, 00:29:14.519 "rw_mbytes_per_sec": 0, 00:29:14.519 "r_mbytes_per_sec": 0, 00:29:14.519 "w_mbytes_per_sec": 0 00:29:14.519 }, 00:29:14.519 "claimed": false, 00:29:14.519 "zoned": false, 00:29:14.519 "supported_io_types": { 00:29:14.519 "read": true, 00:29:14.519 "write": true, 00:29:14.519 "unmap": false, 00:29:14.519 "flush": true, 00:29:14.519 "reset": true, 00:29:14.519 "nvme_admin": true, 00:29:14.519 "nvme_io": true, 00:29:14.519 "nvme_io_md": false, 00:29:14.519 "write_zeroes": true, 00:29:14.519 "zcopy": false, 00:29:14.519 "get_zone_info": false, 00:29:14.519 "zone_management": false, 00:29:14.519 "zone_append": false, 00:29:14.519 "compare": true, 00:29:14.519 "compare_and_write": true, 00:29:14.519 "abort": true, 00:29:14.519 "seek_hole": false, 00:29:14.519 "seek_data": false, 00:29:14.519 "copy": true, 00:29:14.519 "nvme_iov_md": false 00:29:14.519 }, 00:29:14.519 "memory_domains": [ 00:29:14.519 { 00:29:14.519 
"dma_device_id": "system", 00:29:14.519 "dma_device_type": 1 00:29:14.519 } 00:29:14.519 ], 00:29:14.519 "driver_specific": { 00:29:14.519 "nvme": [ 00:29:14.519 { 00:29:14.519 "trid": { 00:29:14.519 "trtype": "TCP", 00:29:14.519 "adrfam": "IPv4", 00:29:14.519 "traddr": "10.0.0.2", 00:29:14.519 "trsvcid": "4420", 00:29:14.519 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:14.519 }, 00:29:14.519 "ctrlr_data": { 00:29:14.519 "cntlid": 2, 00:29:14.519 "vendor_id": "0x8086", 00:29:14.519 "model_number": "SPDK bdev Controller", 00:29:14.519 "serial_number": "00000000000000000000", 00:29:14.519 "firmware_revision": "24.09.1", 00:29:14.519 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:14.519 "oacs": { 00:29:14.519 "security": 0, 00:29:14.519 "format": 0, 00:29:14.519 "firmware": 0, 00:29:14.519 "ns_manage": 0 00:29:14.519 }, 00:29:14.519 "multi_ctrlr": true, 00:29:14.519 "ana_reporting": false 00:29:14.519 }, 00:29:14.519 "vs": { 00:29:14.519 "nvme_version": "1.3" 00:29:14.519 }, 00:29:14.519 "ns_data": { 00:29:14.519 "id": 1, 00:29:14.519 "can_share": true 00:29:14.519 } 00:29:14.519 } 00:29:14.519 ], 00:29:14.519 "mp_policy": "active_passive" 00:29:14.519 } 00:29:14.519 } 00:29:14.519 ] 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.iILUIy2ufI 00:29:14.519 10:03:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.iILUIy2ufI 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.iILUIy2ufI 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.519 [2024-12-07 10:03:43.185410] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:14.519 [2024-12-07 10:03:43.185499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.519 10:03:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.519 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.519 [2024-12-07 10:03:43.201468] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:14.779 nvme0n1 00:29:14.779 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.779 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:14.779 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.779 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.779 [ 00:29:14.779 { 00:29:14.779 "name": "nvme0n1", 00:29:14.779 "aliases": [ 00:29:14.779 "c9dca51c-36ab-4ae0-bfff-39007f578a0f" 00:29:14.779 ], 00:29:14.779 "product_name": "NVMe disk", 00:29:14.779 "block_size": 512, 00:29:14.779 "num_blocks": 2097152, 00:29:14.779 "uuid": "c9dca51c-36ab-4ae0-bfff-39007f578a0f", 00:29:14.779 "numa_id": 1, 00:29:14.779 "assigned_rate_limits": { 00:29:14.779 "rw_ios_per_sec": 0, 00:29:14.779 "rw_mbytes_per_sec": 0, 
00:29:14.779 "r_mbytes_per_sec": 0, 00:29:14.779 "w_mbytes_per_sec": 0 00:29:14.779 }, 00:29:14.779 "claimed": false, 00:29:14.779 "zoned": false, 00:29:14.779 "supported_io_types": { 00:29:14.779 "read": true, 00:29:14.779 "write": true, 00:29:14.779 "unmap": false, 00:29:14.779 "flush": true, 00:29:14.779 "reset": true, 00:29:14.779 "nvme_admin": true, 00:29:14.779 "nvme_io": true, 00:29:14.779 "nvme_io_md": false, 00:29:14.779 "write_zeroes": true, 00:29:14.779 "zcopy": false, 00:29:14.779 "get_zone_info": false, 00:29:14.779 "zone_management": false, 00:29:14.779 "zone_append": false, 00:29:14.779 "compare": true, 00:29:14.779 "compare_and_write": true, 00:29:14.779 "abort": true, 00:29:14.779 "seek_hole": false, 00:29:14.779 "seek_data": false, 00:29:14.779 "copy": true, 00:29:14.779 "nvme_iov_md": false 00:29:14.779 }, 00:29:14.779 "memory_domains": [ 00:29:14.779 { 00:29:14.779 "dma_device_id": "system", 00:29:14.779 "dma_device_type": 1 00:29:14.779 } 00:29:14.779 ], 00:29:14.779 "driver_specific": { 00:29:14.779 "nvme": [ 00:29:14.779 { 00:29:14.779 "trid": { 00:29:14.779 "trtype": "TCP", 00:29:14.779 "adrfam": "IPv4", 00:29:14.779 "traddr": "10.0.0.2", 00:29:14.779 "trsvcid": "4421", 00:29:14.779 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:14.779 }, 00:29:14.779 "ctrlr_data": { 00:29:14.779 "cntlid": 3, 00:29:14.779 "vendor_id": "0x8086", 00:29:14.779 "model_number": "SPDK bdev Controller", 00:29:14.779 "serial_number": "00000000000000000000", 00:29:14.779 "firmware_revision": "24.09.1", 00:29:14.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:14.779 "oacs": { 00:29:14.779 "security": 0, 00:29:14.779 "format": 0, 00:29:14.779 "firmware": 0, 00:29:14.779 "ns_manage": 0 00:29:14.779 }, 00:29:14.779 "multi_ctrlr": true, 00:29:14.779 "ana_reporting": false 00:29:14.779 }, 00:29:14.779 "vs": { 00:29:14.779 "nvme_version": "1.3" 00:29:14.779 }, 00:29:14.779 "ns_data": { 00:29:14.779 "id": 1, 00:29:14.779 "can_share": true 00:29:14.779 } 00:29:14.779 } 
00:29:14.779 ], 00:29:14.779 "mp_policy": "active_passive" 00:29:14.779 } 00:29:14.779 } 00:29:14.779 ] 00:29:14.779 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.779 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.779 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.779 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.779 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.iILUIy2ufI 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:14.780 rmmod nvme_tcp 00:29:14.780 rmmod nvme_fabrics 00:29:14.780 rmmod nvme_keyring 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:14.780 10:03:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 1376151 ']' 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 1376151 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1376151 ']' 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1376151 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1376151 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1376151' 00:29:14.780 killing process with pid 1376151 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1376151 00:29:14.780 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1376151 00:29:15.039 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:15.039 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:15.039 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:15.039 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:15.039 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-save 00:29:15.039 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:15.039 
10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-restore 00:29:15.039 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:15.039 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:15.039 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.039 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.039 10:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.945 10:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:17.204 00:29:17.204 real 0m8.959s 00:29:17.204 user 0m2.844s 00:29:17.204 sys 0m4.534s 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.204 ************************************ 00:29:17.204 END TEST nvmf_async_init 00:29:17.204 ************************************ 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.204 ************************************ 00:29:17.204 START TEST dma 00:29:17.204 ************************************ 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
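Aside: the TLS-PSK portion of the nvmf_async_init run above (mktemp, echo of the interchange-format key, chmod 0600, then the keyring/subsystem RPCs) can be condensed into a standalone sketch. Only the key-file creation below is directly runnable; the `rpc.py` calls are reproduced from the log in comments since they require a live SPDK target, and the key value is the same well-known sample key visible in the trace:

```shell
#!/usr/bin/env bash
# Recreate the TLS PSK interchange-format key file the way async_init.sh does.
key_path=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"

# The target side then registers the key and restricts the subsystem to the
# PSK-authenticated host, as logged above (requires a running SPDK target):
#   rpc.py keyring_file_add_key key0 "$key_path"
#   rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
#   rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
#       -t tcp -a 10.0.0.2 -s 4421 --secure-channel
#   rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
#       nqn.2016-06.io.spdk:host1 --psk key0
#   rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
#       -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

echo "key file: $key_path (mode $(stat -c %a "$key_path"))"
```

Note the trailing `rm -f` and `trap` cleanup in the log: the key file holds secret material, so the test removes it as soon as the controller has been detached.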
00:29:17.204 * Looking for test storage... 00:29:17.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:17.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.204 --rc genhtml_branch_coverage=1 00:29:17.204 --rc genhtml_function_coverage=1 00:29:17.204 --rc genhtml_legend=1 00:29:17.204 --rc geninfo_all_blocks=1 00:29:17.204 --rc geninfo_unexecuted_blocks=1 00:29:17.204 00:29:17.204 ' 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:17.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.204 --rc genhtml_branch_coverage=1 00:29:17.204 --rc genhtml_function_coverage=1 
00:29:17.204 --rc genhtml_legend=1 00:29:17.204 --rc geninfo_all_blocks=1 00:29:17.204 --rc geninfo_unexecuted_blocks=1 00:29:17.204 00:29:17.204 ' 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:17.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.204 --rc genhtml_branch_coverage=1 00:29:17.204 --rc genhtml_function_coverage=1 00:29:17.204 --rc genhtml_legend=1 00:29:17.204 --rc geninfo_all_blocks=1 00:29:17.204 --rc geninfo_unexecuted_blocks=1 00:29:17.204 00:29:17.204 ' 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:17.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.204 --rc genhtml_branch_coverage=1 00:29:17.204 --rc genhtml_function_coverage=1 00:29:17.204 --rc genhtml_legend=1 00:29:17.204 --rc geninfo_all_blocks=1 00:29:17.204 --rc geninfo_unexecuted_blocks=1 00:29:17.204 00:29:17.204 ' 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:17.204 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:17.205 
10:03:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:17.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:17.205 00:29:17.205 real 0m0.168s 00:29:17.205 user 0m0.095s 00:29:17.205 sys 0m0.078s 00:29:17.205 10:03:45 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:17.205 10:03:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:17.205 ************************************ 00:29:17.205 END TEST dma 00:29:17.205 ************************************ 00:29:17.464 10:03:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:17.464 10:03:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:17.464 10:03:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:17.464 10:03:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.464 ************************************ 00:29:17.464 START TEST nvmf_identify 00:29:17.464 ************************************ 00:29:17.464 10:03:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:17.464 * Looking for test storage... 
00:29:17.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:17.464 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:17.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.465 --rc genhtml_branch_coverage=1 00:29:17.465 --rc genhtml_function_coverage=1 00:29:17.465 --rc genhtml_legend=1 00:29:17.465 --rc geninfo_all_blocks=1 00:29:17.465 --rc geninfo_unexecuted_blocks=1 00:29:17.465 00:29:17.465 ' 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- 
# LCOV_OPTS=' 00:29:17.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.465 --rc genhtml_branch_coverage=1 00:29:17.465 --rc genhtml_function_coverage=1 00:29:17.465 --rc genhtml_legend=1 00:29:17.465 --rc geninfo_all_blocks=1 00:29:17.465 --rc geninfo_unexecuted_blocks=1 00:29:17.465 00:29:17.465 ' 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:17.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.465 --rc genhtml_branch_coverage=1 00:29:17.465 --rc genhtml_function_coverage=1 00:29:17.465 --rc genhtml_legend=1 00:29:17.465 --rc geninfo_all_blocks=1 00:29:17.465 --rc geninfo_unexecuted_blocks=1 00:29:17.465 00:29:17.465 ' 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:17.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.465 --rc genhtml_branch_coverage=1 00:29:17.465 --rc genhtml_function_coverage=1 00:29:17.465 --rc genhtml_legend=1 00:29:17.465 --rc geninfo_all_blocks=1 00:29:17.465 --rc geninfo_unexecuted_blocks=1 00:29:17.465 00:29:17.465 ' 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:17.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:17.465 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.037 10:03:51 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:24.037 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.037 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:24.038 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:24.038 Found net devices under 0000:86:00.0: cvl_0_0 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:24.038 Found net devices under 0000:86:00.1: cvl_0_1 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # is_hw=yes 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.038 10:03:51 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:24.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:29:24.038 00:29:24.038 --- 10.0.0.2 ping statistics --- 00:29:24.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.038 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:24.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:29:24.038 00:29:24.038 --- 10.0.0.1 ping statistics --- 00:29:24.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.038 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # return 0 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:24.038 10:03:51 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.038 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1379753 00:29:24.039 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:24.039 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:24.039 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1379753 00:29:24.039 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1379753 ']' 00:29:24.039 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.039 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:24.039 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.039 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:24.039 10:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.039 [2024-12-07 10:03:51.881966] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:29:24.039 [2024-12-07 10:03:51.882015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.039 [2024-12-07 10:03:51.941524] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:24.039 [2024-12-07 10:03:51.984923] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.039 [2024-12-07 10:03:51.984967] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.039 [2024-12-07 10:03:51.984974] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.039 [2024-12-07 10:03:51.984980] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.039 [2024-12-07 10:03:51.984985] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:24.039 [2024-12-07 10:03:51.985038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.039 [2024-12-07 10:03:51.985132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.039 [2024-12-07 10:03:51.985159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.039 [2024-12-07 10:03:51.985158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.039 [2024-12-07 10:03:52.099073] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.039 Malloc0 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.039 10:03:52 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.039 [2024-12-07 10:03:52.182945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.039 10:03:52 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.039 [ 00:29:24.039 { 00:29:24.039 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:24.039 "subtype": "Discovery", 00:29:24.039 "listen_addresses": [ 00:29:24.039 { 00:29:24.039 "trtype": "TCP", 00:29:24.039 "adrfam": "IPv4", 00:29:24.039 "traddr": "10.0.0.2", 00:29:24.039 "trsvcid": "4420" 00:29:24.039 } 00:29:24.039 ], 00:29:24.039 "allow_any_host": true, 00:29:24.039 "hosts": [] 00:29:24.039 }, 00:29:24.039 { 00:29:24.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.039 "subtype": "NVMe", 00:29:24.039 "listen_addresses": [ 00:29:24.039 { 00:29:24.039 "trtype": "TCP", 00:29:24.039 "adrfam": "IPv4", 00:29:24.039 "traddr": "10.0.0.2", 00:29:24.039 "trsvcid": "4420" 00:29:24.039 } 00:29:24.039 ], 00:29:24.039 "allow_any_host": true, 00:29:24.039 "hosts": [], 00:29:24.039 "serial_number": "SPDK00000000000001", 00:29:24.039 "model_number": "SPDK bdev Controller", 00:29:24.039 "max_namespaces": 32, 00:29:24.039 "min_cntlid": 1, 00:29:24.039 "max_cntlid": 65519, 00:29:24.039 "namespaces": [ 00:29:24.039 { 00:29:24.039 "nsid": 1, 00:29:24.039 "bdev_name": "Malloc0", 00:29:24.039 "name": "Malloc0", 00:29:24.039 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:24.039 "eui64": "ABCDEF0123456789", 00:29:24.039 "uuid": "b1b3fbf7-7c81-4580-aceb-bd576566f4a1" 00:29:24.039 } 00:29:24.039 ] 00:29:24.039 } 00:29:24.039 ] 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.039 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:24.039 [2024-12-07 10:03:52.234053] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:29:24.040 [2024-12-07 10:03:52.234087] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379898 ] 00:29:24.040 [2024-12-07 10:03:52.259675] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:24.040 [2024-12-07 10:03:52.259726] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:24.040 [2024-12-07 10:03:52.259731] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:24.040 [2024-12-07 10:03:52.259742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:24.040 [2024-12-07 10:03:52.259751] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:24.040 [2024-12-07 10:03:52.263259] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:24.040 [2024-12-07 10:03:52.263295] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1104ad0 0 00:29:24.040 [2024-12-07 10:03:52.269963] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:24.040 [2024-12-07 10:03:52.269979] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:24.040 [2024-12-07 10:03:52.269983] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:24.040 [2024-12-07 10:03:52.269987] 
nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:24.040 [2024-12-07 10:03:52.270018] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.040 [2024-12-07 10:03:52.270024] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.040 [2024-12-07 10:03:52.270027] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1104ad0) 00:29:24.040 [2024-12-07 10:03:52.270041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:24.040 [2024-12-07 10:03:52.270058] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a300, cid 0, qid 0 00:29:24.040 [2024-12-07 10:03:52.276958] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.040 [2024-12-07 10:03:52.276967] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.040 [2024-12-07 10:03:52.276970] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.040 [2024-12-07 10:03:52.276974] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a300) on tqpair=0x1104ad0 00:29:24.040 [2024-12-07 10:03:52.276984] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:24.040 [2024-12-07 10:03:52.276991] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:24.040 [2024-12-07 10:03:52.276996] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:24.040 [2024-12-07 10:03:52.277010] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.040 [2024-12-07 10:03:52.277014] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.040 [2024-12-07 10:03:52.277017] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1104ad0) 
00:29:24.040 [2024-12-07 10:03:52.277024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.040 [2024-12-07 10:03:52.277040] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a300, cid 0, qid 0 00:29:24.040 [2024-12-07 10:03:52.277134] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.040 [2024-12-07 10:03:52.277140] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.040 [2024-12-07 10:03:52.277143] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.040 [2024-12-07 10:03:52.277146] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a300) on tqpair=0x1104ad0 00:29:24.040 [2024-12-07 10:03:52.277151] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:24.040 [2024-12-07 10:03:52.277158] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:24.040 [2024-12-07 10:03:52.277164] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.040 [2024-12-07 10:03:52.277168] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.040 [2024-12-07 10:03:52.277171] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1104ad0) 00:29:24.040 [2024-12-07 10:03:52.277177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.040 [2024-12-07 10:03:52.277188] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a300, cid 0, qid 0 00:29:24.040 [2024-12-07 10:03:52.277255] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.040 [2024-12-07 10:03:52.277261] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:24.040 [2024-12-07 10:03:52.277264] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.040 [2024-12-07 10:03:52.277268] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a300) on tqpair=0x1104ad0 00:29:24.040 [2024-12-07 10:03:52.277273] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:24.040 [2024-12-07 10:03:52.277280] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:24.040 [2024-12-07 10:03:52.277286] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.040 [2024-12-07 10:03:52.277290] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.040 [2024-12-07 10:03:52.277293] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1104ad0) 00:29:24.040 [2024-12-07 10:03:52.277299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.040 [2024-12-07 10:03:52.277308] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a300, cid 0, qid 0 00:29:24.040 [2024-12-07 10:03:52.277375] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.040 [2024-12-07 10:03:52.277381] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.040 [2024-12-07 10:03:52.277385] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.040 [2024-12-07 10:03:52.277388] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a300) on tqpair=0x1104ad0 00:29:24.040 [2024-12-07 10:03:52.277393] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:24.040 [2024-12-07 10:03:52.277401] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.040 [2024-12-07 10:03:52.277404] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.040 [2024-12-07 10:03:52.277408] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1104ad0) 00:29:24.040 [2024-12-07 10:03:52.277413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.040 [2024-12-07 10:03:52.277423] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a300, cid 0, qid 0 00:29:24.040 [2024-12-07 10:03:52.277494] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.040 [2024-12-07 10:03:52.277500] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.040 [2024-12-07 10:03:52.277503] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.040 [2024-12-07 10:03:52.277506] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a300) on tqpair=0x1104ad0 00:29:24.040 [2024-12-07 10:03:52.277511] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:24.040 [2024-12-07 10:03:52.277515] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:24.040 [2024-12-07 10:03:52.277521] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:24.040 [2024-12-07 10:03:52.277627] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:24.040 [2024-12-07 10:03:52.277631] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 
00:29:24.040 [2024-12-07 10:03:52.277639] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.040 [2024-12-07 10:03:52.277642] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.040 [2024-12-07 10:03:52.277645] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1104ad0) 00:29:24.041 [2024-12-07 10:03:52.277651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.041 [2024-12-07 10:03:52.277660] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a300, cid 0, qid 0 00:29:24.041 [2024-12-07 10:03:52.277746] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.041 [2024-12-07 10:03:52.277752] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.041 [2024-12-07 10:03:52.277755] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.277758] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a300) on tqpair=0x1104ad0 00:29:24.041 [2024-12-07 10:03:52.277763] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:24.041 [2024-12-07 10:03:52.277771] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.277774] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.277778] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1104ad0) 00:29:24.041 [2024-12-07 10:03:52.277783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.041 [2024-12-07 10:03:52.277793] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a300, cid 0, qid 0 00:29:24.041 [2024-12-07 
10:03:52.277863] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.041 [2024-12-07 10:03:52.277868] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.041 [2024-12-07 10:03:52.277872] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.277875] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a300) on tqpair=0x1104ad0 00:29:24.041 [2024-12-07 10:03:52.277879] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:24.041 [2024-12-07 10:03:52.277883] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:24.041 [2024-12-07 10:03:52.277890] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:24.041 [2024-12-07 10:03:52.277902] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:24.041 [2024-12-07 10:03:52.277911] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.277915] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1104ad0) 00:29:24.041 [2024-12-07 10:03:52.277921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.041 [2024-12-07 10:03:52.277930] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a300, cid 0, qid 0 00:29:24.041 [2024-12-07 10:03:52.278025] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.041 [2024-12-07 10:03:52.278032] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:29:24.041 [2024-12-07 10:03:52.278035] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.278038] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1104ad0): datao=0, datal=4096, cccid=0 00:29:24.041 [2024-12-07 10:03:52.278043] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x115a300) on tqpair(0x1104ad0): expected_datao=0, payload_size=4096 00:29:24.041 [2024-12-07 10:03:52.278048] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.278070] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.278074] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.278115] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.041 [2024-12-07 10:03:52.278121] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.041 [2024-12-07 10:03:52.278124] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.278127] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a300) on tqpair=0x1104ad0 00:29:24.041 [2024-12-07 10:03:52.278135] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:24.041 [2024-12-07 10:03:52.278139] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:24.041 [2024-12-07 10:03:52.278143] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:24.041 [2024-12-07 10:03:52.278147] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:24.041 [2024-12-07 10:03:52.278152] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:29:24.041 [2024-12-07 10:03:52.278156] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:24.041 [2024-12-07 10:03:52.278165] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:24.041 [2024-12-07 10:03:52.278171] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.278175] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.278178] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1104ad0) 00:29:24.041 [2024-12-07 10:03:52.278184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:24.041 [2024-12-07 10:03:52.278194] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a300, cid 0, qid 0 00:29:24.041 [2024-12-07 10:03:52.278262] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.041 [2024-12-07 10:03:52.278268] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.041 [2024-12-07 10:03:52.278272] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.278275] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a300) on tqpair=0x1104ad0 00:29:24.041 [2024-12-07 10:03:52.278282] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.278285] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.278291] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1104ad0) 00:29:24.041 [2024-12-07 10:03:52.278296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.041 [2024-12-07 10:03:52.278302] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.278306] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.278309] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1104ad0) 00:29:24.041 [2024-12-07 10:03:52.278314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.041 [2024-12-07 10:03:52.278319] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.278322] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.278326] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1104ad0) 00:29:24.041 [2024-12-07 10:03:52.278330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.041 [2024-12-07 10:03:52.278336] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.278339] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.041 [2024-12-07 10:03:52.278342] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1104ad0) 00:29:24.041 [2024-12-07 10:03:52.278347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.041 [2024-12-07 10:03:52.278351] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:24.041 [2024-12-07 10:03:52.278362] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:29:24.042 [2024-12-07 10:03:52.278368] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.278371] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1104ad0) 00:29:24.042 [2024-12-07 10:03:52.278377] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.042 [2024-12-07 10:03:52.278388] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a300, cid 0, qid 0 00:29:24.042 [2024-12-07 10:03:52.278393] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a480, cid 1, qid 0 00:29:24.042 [2024-12-07 10:03:52.278397] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a600, cid 2, qid 0 00:29:24.042 [2024-12-07 10:03:52.278401] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a780, cid 3, qid 0 00:29:24.042 [2024-12-07 10:03:52.278405] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a900, cid 4, qid 0 00:29:24.042 [2024-12-07 10:03:52.278505] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.042 [2024-12-07 10:03:52.278511] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.042 [2024-12-07 10:03:52.278514] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.278517] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a900) on tqpair=0x1104ad0 00:29:24.042 [2024-12-07 10:03:52.278522] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:24.042 [2024-12-07 10:03:52.278526] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:24.042 [2024-12-07 10:03:52.278535] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.278539] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1104ad0) 00:29:24.042 [2024-12-07 10:03:52.278545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.042 [2024-12-07 10:03:52.278557] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a900, cid 4, qid 0 00:29:24.042 [2024-12-07 10:03:52.278636] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.042 [2024-12-07 10:03:52.278642] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.042 [2024-12-07 10:03:52.278645] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.278648] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1104ad0): datao=0, datal=4096, cccid=4 00:29:24.042 [2024-12-07 10:03:52.278652] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x115a900) on tqpair(0x1104ad0): expected_datao=0, payload_size=4096 00:29:24.042 [2024-12-07 10:03:52.278656] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.278661] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.278665] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.278695] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.042 [2024-12-07 10:03:52.278700] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.042 [2024-12-07 10:03:52.278703] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.278706] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a900) on tqpair=0x1104ad0 00:29:24.042 [2024-12-07 10:03:52.278719] 
nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:24.042 [2024-12-07 10:03:52.278745] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.278749] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1104ad0) 00:29:24.042 [2024-12-07 10:03:52.278755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.042 [2024-12-07 10:03:52.278761] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.278764] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.278768] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1104ad0) 00:29:24.042 [2024-12-07 10:03:52.278773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.042 [2024-12-07 10:03:52.278784] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a900, cid 4, qid 0 00:29:24.042 [2024-12-07 10:03:52.278789] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115aa80, cid 5, qid 0 00:29:24.042 [2024-12-07 10:03:52.278890] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.042 [2024-12-07 10:03:52.278896] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.042 [2024-12-07 10:03:52.278899] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.278902] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1104ad0): datao=0, datal=1024, cccid=4 00:29:24.042 [2024-12-07 10:03:52.278906] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x115a900) on tqpair(0x1104ad0): expected_datao=0, 
payload_size=1024 00:29:24.042 [2024-12-07 10:03:52.278910] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.278915] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.278919] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.278924] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.042 [2024-12-07 10:03:52.278929] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.042 [2024-12-07 10:03:52.278931] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.278935] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115aa80) on tqpair=0x1104ad0 00:29:24.042 [2024-12-07 10:03:52.321958] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.042 [2024-12-07 10:03:52.321969] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.042 [2024-12-07 10:03:52.321973] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.321977] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a900) on tqpair=0x1104ad0 00:29:24.042 [2024-12-07 10:03:52.321988] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.321992] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1104ad0) 00:29:24.042 [2024-12-07 10:03:52.321999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.042 [2024-12-07 10:03:52.322015] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a900, cid 4, qid 0 00:29:24.042 [2024-12-07 10:03:52.322104] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.042 [2024-12-07 10:03:52.322110] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.042 [2024-12-07 10:03:52.322113] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.322116] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1104ad0): datao=0, datal=3072, cccid=4 00:29:24.042 [2024-12-07 10:03:52.322120] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x115a900) on tqpair(0x1104ad0): expected_datao=0, payload_size=3072 00:29:24.042 [2024-12-07 10:03:52.322124] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.322145] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.322149] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.322193] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.042 [2024-12-07 10:03:52.322198] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.042 [2024-12-07 10:03:52.322202] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.322205] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a900) on tqpair=0x1104ad0 00:29:24.042 [2024-12-07 10:03:52.322212] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.042 [2024-12-07 10:03:52.322215] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1104ad0) 00:29:24.042 [2024-12-07 10:03:52.322221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.042 [2024-12-07 10:03:52.322234] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a900, cid 4, qid 0 00:29:24.043 [2024-12-07 10:03:52.322309] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.043 [2024-12-07 
10:03:52.322315] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.043 [2024-12-07 10:03:52.322318] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.043 [2024-12-07 10:03:52.322321] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1104ad0): datao=0, datal=8, cccid=4 00:29:24.043 [2024-12-07 10:03:52.322325] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x115a900) on tqpair(0x1104ad0): expected_datao=0, payload_size=8 00:29:24.043 [2024-12-07 10:03:52.322329] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.043 [2024-12-07 10:03:52.322334] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.043 [2024-12-07 10:03:52.322338] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.043 [2024-12-07 10:03:52.364021] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.043 [2024-12-07 10:03:52.364032] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.043 [2024-12-07 10:03:52.364035] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.043 [2024-12-07 10:03:52.364039] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a900) on tqpair=0x1104ad0 00:29:24.043 ===================================================== 00:29:24.043 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:24.043 ===================================================== 00:29:24.043 Controller Capabilities/Features 00:29:24.043 ================================ 00:29:24.043 Vendor ID: 0000 00:29:24.043 Subsystem Vendor ID: 0000 00:29:24.043 Serial Number: .................... 00:29:24.043 Model Number: ........................................ 
00:29:24.043 Firmware Version: 24.09.1 00:29:24.043 Recommended Arb Burst: 0 00:29:24.043 IEEE OUI Identifier: 00 00 00 00:29:24.043 Multi-path I/O 00:29:24.043 May have multiple subsystem ports: No 00:29:24.043 May have multiple controllers: No 00:29:24.043 Associated with SR-IOV VF: No 00:29:24.043 Max Data Transfer Size: 131072 00:29:24.043 Max Number of Namespaces: 0 00:29:24.043 Max Number of I/O Queues: 1024 00:29:24.043 NVMe Specification Version (VS): 1.3 00:29:24.043 NVMe Specification Version (Identify): 1.3 00:29:24.043 Maximum Queue Entries: 128 00:29:24.043 Contiguous Queues Required: Yes 00:29:24.043 Arbitration Mechanisms Supported 00:29:24.043 Weighted Round Robin: Not Supported 00:29:24.043 Vendor Specific: Not Supported 00:29:24.043 Reset Timeout: 15000 ms 00:29:24.043 Doorbell Stride: 4 bytes 00:29:24.043 NVM Subsystem Reset: Not Supported 00:29:24.043 Command Sets Supported 00:29:24.043 NVM Command Set: Supported 00:29:24.043 Boot Partition: Not Supported 00:29:24.043 Memory Page Size Minimum: 4096 bytes 00:29:24.043 Memory Page Size Maximum: 4096 bytes 00:29:24.043 Persistent Memory Region: Not Supported 00:29:24.043 Optional Asynchronous Events Supported 00:29:24.043 Namespace Attribute Notices: Not Supported 00:29:24.043 Firmware Activation Notices: Not Supported 00:29:24.043 ANA Change Notices: Not Supported 00:29:24.043 PLE Aggregate Log Change Notices: Not Supported 00:29:24.043 LBA Status Info Alert Notices: Not Supported 00:29:24.043 EGE Aggregate Log Change Notices: Not Supported 00:29:24.043 Normal NVM Subsystem Shutdown event: Not Supported 00:29:24.043 Zone Descriptor Change Notices: Not Supported 00:29:24.043 Discovery Log Change Notices: Supported 00:29:24.043 Controller Attributes 00:29:24.043 128-bit Host Identifier: Not Supported 00:29:24.043 Non-Operational Permissive Mode: Not Supported 00:29:24.043 NVM Sets: Not Supported 00:29:24.043 Read Recovery Levels: Not Supported 00:29:24.043 Endurance Groups: Not Supported 
00:29:24.043 Predictable Latency Mode: Not Supported 00:29:24.043 Traffic Based Keep ALive: Not Supported 00:29:24.043 Namespace Granularity: Not Supported 00:29:24.043 SQ Associations: Not Supported 00:29:24.043 UUID List: Not Supported 00:29:24.043 Multi-Domain Subsystem: Not Supported 00:29:24.043 Fixed Capacity Management: Not Supported 00:29:24.043 Variable Capacity Management: Not Supported 00:29:24.043 Delete Endurance Group: Not Supported 00:29:24.043 Delete NVM Set: Not Supported 00:29:24.043 Extended LBA Formats Supported: Not Supported 00:29:24.043 Flexible Data Placement Supported: Not Supported 00:29:24.043 00:29:24.043 Controller Memory Buffer Support 00:29:24.043 ================================ 00:29:24.043 Supported: No 00:29:24.043 00:29:24.043 Persistent Memory Region Support 00:29:24.043 ================================ 00:29:24.043 Supported: No 00:29:24.043 00:29:24.043 Admin Command Set Attributes 00:29:24.043 ============================ 00:29:24.043 Security Send/Receive: Not Supported 00:29:24.043 Format NVM: Not Supported 00:29:24.043 Firmware Activate/Download: Not Supported 00:29:24.043 Namespace Management: Not Supported 00:29:24.043 Device Self-Test: Not Supported 00:29:24.043 Directives: Not Supported 00:29:24.043 NVMe-MI: Not Supported 00:29:24.043 Virtualization Management: Not Supported 00:29:24.043 Doorbell Buffer Config: Not Supported 00:29:24.043 Get LBA Status Capability: Not Supported 00:29:24.043 Command & Feature Lockdown Capability: Not Supported 00:29:24.043 Abort Command Limit: 1 00:29:24.043 Async Event Request Limit: 4 00:29:24.043 Number of Firmware Slots: N/A 00:29:24.044 Firmware Slot 1 Read-Only: N/A 00:29:24.044 Firmware Activation Without Reset: N/A 00:29:24.044 Multiple Update Detection Support: N/A 00:29:24.044 Firmware Update Granularity: No Information Provided 00:29:24.044 Per-Namespace SMART Log: No 00:29:24.044 Asymmetric Namespace Access Log Page: Not Supported 00:29:24.044 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:29:24.044 Command Effects Log Page: Not Supported 00:29:24.044 Get Log Page Extended Data: Supported 00:29:24.044 Telemetry Log Pages: Not Supported 00:29:24.044 Persistent Event Log Pages: Not Supported 00:29:24.044 Supported Log Pages Log Page: May Support 00:29:24.044 Commands Supported & Effects Log Page: Not Supported 00:29:24.044 Feature Identifiers & Effects Log Page:May Support 00:29:24.044 NVMe-MI Commands & Effects Log Page: May Support 00:29:24.044 Data Area 4 for Telemetry Log: Not Supported 00:29:24.044 Error Log Page Entries Supported: 128 00:29:24.044 Keep Alive: Not Supported 00:29:24.044 00:29:24.044 NVM Command Set Attributes 00:29:24.044 ========================== 00:29:24.044 Submission Queue Entry Size 00:29:24.044 Max: 1 00:29:24.044 Min: 1 00:29:24.044 Completion Queue Entry Size 00:29:24.044 Max: 1 00:29:24.044 Min: 1 00:29:24.044 Number of Namespaces: 0 00:29:24.044 Compare Command: Not Supported 00:29:24.044 Write Uncorrectable Command: Not Supported 00:29:24.044 Dataset Management Command: Not Supported 00:29:24.044 Write Zeroes Command: Not Supported 00:29:24.044 Set Features Save Field: Not Supported 00:29:24.044 Reservations: Not Supported 00:29:24.044 Timestamp: Not Supported 00:29:24.044 Copy: Not Supported 00:29:24.044 Volatile Write Cache: Not Present 00:29:24.044 Atomic Write Unit (Normal): 1 00:29:24.044 Atomic Write Unit (PFail): 1 00:29:24.044 Atomic Compare & Write Unit: 1 00:29:24.044 Fused Compare & Write: Supported 00:29:24.044 Scatter-Gather List 00:29:24.044 SGL Command Set: Supported 00:29:24.044 SGL Keyed: Supported 00:29:24.044 SGL Bit Bucket Descriptor: Not Supported 00:29:24.044 SGL Metadata Pointer: Not Supported 00:29:24.044 Oversized SGL: Not Supported 00:29:24.044 SGL Metadata Address: Not Supported 00:29:24.044 SGL Offset: Supported 00:29:24.044 Transport SGL Data Block: Not Supported 00:29:24.044 Replay Protected Memory Block: Not Supported 00:29:24.044 00:29:24.044 
Firmware Slot Information 00:29:24.044 ========================= 00:29:24.044 Active slot: 0 00:29:24.044 00:29:24.044 00:29:24.044 Error Log 00:29:24.044 ========= 00:29:24.044 00:29:24.044 Active Namespaces 00:29:24.044 ================= 00:29:24.044 Discovery Log Page 00:29:24.044 ================== 00:29:24.044 Generation Counter: 2 00:29:24.044 Number of Records: 2 00:29:24.044 Record Format: 0 00:29:24.044 00:29:24.044 Discovery Log Entry 0 00:29:24.044 ---------------------- 00:29:24.044 Transport Type: 3 (TCP) 00:29:24.044 Address Family: 1 (IPv4) 00:29:24.044 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:24.044 Entry Flags: 00:29:24.044 Duplicate Returned Information: 1 00:29:24.044 Explicit Persistent Connection Support for Discovery: 1 00:29:24.044 Transport Requirements: 00:29:24.044 Secure Channel: Not Required 00:29:24.044 Port ID: 0 (0x0000) 00:29:24.044 Controller ID: 65535 (0xffff) 00:29:24.044 Admin Max SQ Size: 128 00:29:24.044 Transport Service Identifier: 4420 00:29:24.044 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:24.044 Transport Address: 10.0.0.2 00:29:24.044 Discovery Log Entry 1 00:29:24.044 ---------------------- 00:29:24.044 Transport Type: 3 (TCP) 00:29:24.044 Address Family: 1 (IPv4) 00:29:24.044 Subsystem Type: 2 (NVM Subsystem) 00:29:24.044 Entry Flags: 00:29:24.044 Duplicate Returned Information: 0 00:29:24.044 Explicit Persistent Connection Support for Discovery: 0 00:29:24.044 Transport Requirements: 00:29:24.044 Secure Channel: Not Required 00:29:24.044 Port ID: 0 (0x0000) 00:29:24.044 Controller ID: 65535 (0xffff) 00:29:24.044 Admin Max SQ Size: 128 00:29:24.044 Transport Service Identifier: 4420 00:29:24.044 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:24.044 Transport Address: 10.0.0.2 [2024-12-07 10:03:52.364122] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:24.044 [2024-12-07 10:03:52.364134] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a300) on tqpair=0x1104ad0 00:29:24.044 [2024-12-07 10:03:52.364140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.044 [2024-12-07 10:03:52.364145] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a480) on tqpair=0x1104ad0 00:29:24.044 [2024-12-07 10:03:52.364149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.044 [2024-12-07 10:03:52.364154] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a600) on tqpair=0x1104ad0 00:29:24.044 [2024-12-07 10:03:52.364158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.044 [2024-12-07 10:03:52.364162] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a780) on tqpair=0x1104ad0 00:29:24.044 [2024-12-07 10:03:52.364166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.044 [2024-12-07 10:03:52.364174] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.044 [2024-12-07 10:03:52.364178] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.044 [2024-12-07 10:03:52.364181] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1104ad0) 00:29:24.044 [2024-12-07 10:03:52.364188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.044 [2024-12-07 10:03:52.364201] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a780, cid 3, qid 0 00:29:24.044 [2024-12-07 10:03:52.364267] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.044 [2024-12-07 10:03:52.364273] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.044 [2024-12-07 10:03:52.364276] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.044 [2024-12-07 10:03:52.364279] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a780) on tqpair=0x1104ad0 00:29:24.044 [2024-12-07 10:03:52.364285] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.044 [2024-12-07 10:03:52.364289] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.044 [2024-12-07 10:03:52.364292] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1104ad0) 00:29:24.044 [2024-12-07 10:03:52.364298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.044 [2024-12-07 10:03:52.364310] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a780, cid 3, qid 0 00:29:24.044 [2024-12-07 10:03:52.364386] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.045 [2024-12-07 10:03:52.364391] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.045 [2024-12-07 10:03:52.364394] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.364398] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a780) on tqpair=0x1104ad0 00:29:24.045 [2024-12-07 10:03:52.364402] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:24.045 [2024-12-07 10:03:52.364409] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:24.045 [2024-12-07 10:03:52.364418] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.364421] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.045 [2024-12-07 
10:03:52.364424] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1104ad0) 00:29:24.045 [2024-12-07 10:03:52.364430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.045 [2024-12-07 10:03:52.364441] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a780, cid 3, qid 0 00:29:24.045 [2024-12-07 10:03:52.364510] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.045 [2024-12-07 10:03:52.364516] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.045 [2024-12-07 10:03:52.364520] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.364523] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a780) on tqpair=0x1104ad0 00:29:24.045 [2024-12-07 10:03:52.364532] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.364536] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.364539] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1104ad0) 00:29:24.045 [2024-12-07 10:03:52.364545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.045 [2024-12-07 10:03:52.364554] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a780, cid 3, qid 0 00:29:24.045 [2024-12-07 10:03:52.364622] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.045 [2024-12-07 10:03:52.364628] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.045 [2024-12-07 10:03:52.364631] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.364634] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a780) on tqpair=0x1104ad0 
00:29:24.045 [2024-12-07 10:03:52.364642] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.364646] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.364649] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1104ad0) 00:29:24.045 [2024-12-07 10:03:52.364655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.045 [2024-12-07 10:03:52.364664] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a780, cid 3, qid 0 00:29:24.045 [2024-12-07 10:03:52.364733] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.045 [2024-12-07 10:03:52.364739] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.045 [2024-12-07 10:03:52.364742] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.364745] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a780) on tqpair=0x1104ad0 00:29:24.045 [2024-12-07 10:03:52.364753] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.364757] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.364760] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1104ad0) 00:29:24.045 [2024-12-07 10:03:52.364766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.045 [2024-12-07 10:03:52.364775] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a780, cid 3, qid 0 00:29:24.045 [2024-12-07 10:03:52.364843] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.045 [2024-12-07 10:03:52.364849] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.045 
[2024-12-07 10:03:52.364852] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.364855] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a780) on tqpair=0x1104ad0 00:29:24.045 [2024-12-07 10:03:52.364864] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.364867] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.364870] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1104ad0) 00:29:24.045 [2024-12-07 10:03:52.364876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.045 [2024-12-07 10:03:52.364885] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a780, cid 3, qid 0 00:29:24.045 [2024-12-07 10:03:52.368956] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.045 [2024-12-07 10:03:52.368965] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.045 [2024-12-07 10:03:52.368970] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.368974] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a780) on tqpair=0x1104ad0 00:29:24.045 [2024-12-07 10:03:52.368984] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.368988] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.368991] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1104ad0) 00:29:24.045 [2024-12-07 10:03:52.368997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.045 [2024-12-07 10:03:52.369009] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x115a780, cid 3, qid 
0 00:29:24.045 [2024-12-07 10:03:52.369133] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.045 [2024-12-07 10:03:52.369139] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.045 [2024-12-07 10:03:52.369142] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.369145] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x115a780) on tqpair=0x1104ad0 00:29:24.045 [2024-12-07 10:03:52.369152] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:29:24.045 00:29:24.045 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:24.045 [2024-12-07 10:03:52.405698] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:29:24.045 [2024-12-07 10:03:52.405733] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379995 ] 00:29:24.045 [2024-12-07 10:03:52.431251] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:24.045 [2024-12-07 10:03:52.431288] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:24.045 [2024-12-07 10:03:52.431293] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:24.045 [2024-12-07 10:03:52.431304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:24.045 [2024-12-07 10:03:52.431311] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:24.045 [2024-12-07 10:03:52.435127] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:24.045 [2024-12-07 10:03:52.435153] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xfe3ad0 0 00:29:24.045 [2024-12-07 10:03:52.442958] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:24.045 [2024-12-07 10:03:52.442972] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:24.045 [2024-12-07 10:03:52.442976] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:24.045 [2024-12-07 10:03:52.442979] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:24.045 [2024-12-07 10:03:52.443004] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.443009] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.045 [2024-12-07 10:03:52.443013] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ad0) 00:29:24.045 [2024-12-07 10:03:52.443023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:24.046 [2024-12-07 10:03:52.443043] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039300, cid 0, qid 0 00:29:24.046 [2024-12-07 10:03:52.450957] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.046 [2024-12-07 10:03:52.450965] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.046 [2024-12-07 10:03:52.450968] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.450972] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039300) on tqpair=0xfe3ad0 00:29:24.046 [2024-12-07 10:03:52.450980] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:24.046 [2024-12-07 10:03:52.450986] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:24.046 [2024-12-07 10:03:52.450991] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:24.046 [2024-12-07 10:03:52.451001] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.451005] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.451008] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ad0) 00:29:24.046 [2024-12-07 10:03:52.451016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.046 [2024-12-07 10:03:52.451029] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039300, cid 0, qid 0 00:29:24.046 [2024-12-07 10:03:52.451169] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.046 [2024-12-07 10:03:52.451175] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.046 [2024-12-07 10:03:52.451178] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.451181] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039300) on tqpair=0xfe3ad0 00:29:24.046 [2024-12-07 10:03:52.451185] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:24.046 [2024-12-07 10:03:52.451192] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:24.046 [2024-12-07 10:03:52.451198] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.451201] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.451204] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ad0) 00:29:24.046 [2024-12-07 10:03:52.451210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.046 [2024-12-07 10:03:52.451220] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039300, cid 0, qid 0 00:29:24.046 [2024-12-07 10:03:52.451285] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.046 [2024-12-07 10:03:52.451291] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.046 [2024-12-07 10:03:52.451294] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.451297] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039300) on tqpair=0xfe3ad0 00:29:24.046 [2024-12-07 10:03:52.451301] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state 
to check en (no timeout) 00:29:24.046 [2024-12-07 10:03:52.451308] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:24.046 [2024-12-07 10:03:52.451314] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.451317] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.451320] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ad0) 00:29:24.046 [2024-12-07 10:03:52.451326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.046 [2024-12-07 10:03:52.451336] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039300, cid 0, qid 0 00:29:24.046 [2024-12-07 10:03:52.451404] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.046 [2024-12-07 10:03:52.451410] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.046 [2024-12-07 10:03:52.451413] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.451416] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039300) on tqpair=0xfe3ad0 00:29:24.046 [2024-12-07 10:03:52.451420] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:24.046 [2024-12-07 10:03:52.451428] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.451432] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.451435] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ad0) 00:29:24.046 [2024-12-07 10:03:52.451441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.046 [2024-12-07 10:03:52.451451] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039300, cid 0, qid 0 00:29:24.046 [2024-12-07 10:03:52.451517] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.046 [2024-12-07 10:03:52.451522] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.046 [2024-12-07 10:03:52.451526] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.451529] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039300) on tqpair=0xfe3ad0 00:29:24.046 [2024-12-07 10:03:52.451533] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:24.046 [2024-12-07 10:03:52.451537] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:24.046 [2024-12-07 10:03:52.451543] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:24.046 [2024-12-07 10:03:52.451648] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:24.046 [2024-12-07 10:03:52.451651] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:24.046 [2024-12-07 10:03:52.451658] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.451661] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.451664] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ad0) 00:29:24.046 [2024-12-07 10:03:52.451670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.046 [2024-12-07 10:03:52.451679] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039300, cid 0, qid 0 00:29:24.046 [2024-12-07 10:03:52.451743] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.046 [2024-12-07 10:03:52.451748] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.046 [2024-12-07 10:03:52.451751] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.451755] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039300) on tqpair=0xfe3ad0 00:29:24.046 [2024-12-07 10:03:52.451758] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:24.046 [2024-12-07 10:03:52.451767] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.451771] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.451774] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ad0) 00:29:24.046 [2024-12-07 10:03:52.451779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.046 [2024-12-07 10:03:52.451789] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039300, cid 0, qid 0 00:29:24.046 [2024-12-07 10:03:52.451859] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.046 [2024-12-07 10:03:52.451865] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.046 [2024-12-07 10:03:52.451868] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.046 [2024-12-07 10:03:52.451871] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039300) on tqpair=0xfe3ad0 00:29:24.046 [2024-12-07 10:03:52.451875] 
nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:24.047 [2024-12-07 10:03:52.451879] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:24.047 [2024-12-07 10:03:52.451886] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:24.047 [2024-12-07 10:03:52.451893] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:24.047 [2024-12-07 10:03:52.451900] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.451904] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ad0) 00:29:24.047 [2024-12-07 10:03:52.451909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.047 [2024-12-07 10:03:52.451919] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039300, cid 0, qid 0 00:29:24.047 [2024-12-07 10:03:52.452019] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.047 [2024-12-07 10:03:52.452025] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.047 [2024-12-07 10:03:52.452028] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452031] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe3ad0): datao=0, datal=4096, cccid=0 00:29:24.047 [2024-12-07 10:03:52.452035] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1039300) on tqpair(0xfe3ad0): expected_datao=0, payload_size=4096 00:29:24.047 [2024-12-07 10:03:52.452039] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452060] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452064] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452107] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.047 [2024-12-07 10:03:52.452113] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.047 [2024-12-07 10:03:52.452116] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452119] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039300) on tqpair=0xfe3ad0 00:29:24.047 [2024-12-07 10:03:52.452126] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:24.047 [2024-12-07 10:03:52.452130] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:24.047 [2024-12-07 10:03:52.452134] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:24.047 [2024-12-07 10:03:52.452138] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:24.047 [2024-12-07 10:03:52.452141] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:24.047 [2024-12-07 10:03:52.452146] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:24.047 [2024-12-07 10:03:52.452153] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:24.047 [2024-12-07 10:03:52.452159] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452163] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452167] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ad0) 00:29:24.047 [2024-12-07 10:03:52.452173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:24.047 [2024-12-07 10:03:52.452184] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039300, cid 0, qid 0 00:29:24.047 [2024-12-07 10:03:52.452252] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.047 [2024-12-07 10:03:52.452258] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.047 [2024-12-07 10:03:52.452261] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452264] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039300) on tqpair=0xfe3ad0 00:29:24.047 [2024-12-07 10:03:52.452270] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452273] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452276] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ad0) 00:29:24.047 [2024-12-07 10:03:52.452282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.047 [2024-12-07 10:03:52.452287] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452291] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452294] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xfe3ad0) 00:29:24.047 [2024-12-07 10:03:52.452299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:29:24.047 [2024-12-07 10:03:52.452304] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452307] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452310] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xfe3ad0) 00:29:24.047 [2024-12-07 10:03:52.452315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.047 [2024-12-07 10:03:52.452320] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452323] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452327] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.047 [2024-12-07 10:03:52.452332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.047 [2024-12-07 10:03:52.452336] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:24.047 [2024-12-07 10:03:52.452346] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:24.047 [2024-12-07 10:03:52.452352] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452355] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe3ad0) 00:29:24.047 [2024-12-07 10:03:52.452361] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.047 [2024-12-07 10:03:52.452372] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1039300, cid 0, qid 0 00:29:24.047 [2024-12-07 10:03:52.452376] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039480, cid 1, qid 0 00:29:24.047 [2024-12-07 10:03:52.452381] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039600, cid 2, qid 0 00:29:24.047 [2024-12-07 10:03:52.452385] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.047 [2024-12-07 10:03:52.452389] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039900, cid 4, qid 0 00:29:24.047 [2024-12-07 10:03:52.452489] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.047 [2024-12-07 10:03:52.452495] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.047 [2024-12-07 10:03:52.452498] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452501] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039900) on tqpair=0xfe3ad0 00:29:24.047 [2024-12-07 10:03:52.452505] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:24.047 [2024-12-07 10:03:52.452510] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:24.047 [2024-12-07 10:03:52.452517] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:24.047 [2024-12-07 10:03:52.452524] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:24.047 [2024-12-07 10:03:52.452530] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452533] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.047 [2024-12-07 10:03:52.452537] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe3ad0) 00:29:24.047 [2024-12-07 10:03:52.452542] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:24.047 [2024-12-07 10:03:52.452552] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039900, cid 4, qid 0 00:29:24.047 [2024-12-07 10:03:52.452621] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.048 [2024-12-07 10:03:52.452627] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.048 [2024-12-07 10:03:52.452630] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.452633] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039900) on tqpair=0xfe3ad0 00:29:24.048 [2024-12-07 10:03:52.452684] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:24.048 [2024-12-07 10:03:52.452694] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:24.048 [2024-12-07 10:03:52.452700] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.452703] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe3ad0) 00:29:24.048 [2024-12-07 10:03:52.452709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.048 [2024-12-07 10:03:52.452719] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039900, cid 4, qid 0 00:29:24.048 [2024-12-07 10:03:52.452802] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.048 [2024-12-07 10:03:52.452808] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.048 [2024-12-07 10:03:52.452811] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.452814] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe3ad0): datao=0, datal=4096, cccid=4 00:29:24.048 [2024-12-07 10:03:52.452818] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1039900) on tqpair(0xfe3ad0): expected_datao=0, payload_size=4096 00:29:24.048 [2024-12-07 10:03:52.452822] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.452828] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.452831] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.494064] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.048 [2024-12-07 10:03:52.494075] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.048 [2024-12-07 10:03:52.494078] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.494084] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039900) on tqpair=0xfe3ad0 00:29:24.048 [2024-12-07 10:03:52.494094] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:24.048 [2024-12-07 10:03:52.494108] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:24.048 [2024-12-07 10:03:52.494119] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:24.048 [2024-12-07 10:03:52.494125] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.494128] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0xfe3ad0) 00:29:24.048 [2024-12-07 10:03:52.494135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.048 [2024-12-07 10:03:52.494146] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039900, cid 4, qid 0 00:29:24.048 [2024-12-07 10:03:52.494239] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.048 [2024-12-07 10:03:52.494245] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.048 [2024-12-07 10:03:52.494248] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.494251] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe3ad0): datao=0, datal=4096, cccid=4 00:29:24.048 [2024-12-07 10:03:52.494255] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1039900) on tqpair(0xfe3ad0): expected_datao=0, payload_size=4096 00:29:24.048 [2024-12-07 10:03:52.494259] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.494265] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.494268] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.494309] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.048 [2024-12-07 10:03:52.494314] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.048 [2024-12-07 10:03:52.494317] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.494320] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039900) on tqpair=0xfe3ad0 00:29:24.048 [2024-12-07 10:03:52.494330] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:24.048 [2024-12-07 
10:03:52.494340] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:24.048 [2024-12-07 10:03:52.494346] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.494350] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe3ad0) 00:29:24.048 [2024-12-07 10:03:52.494355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.048 [2024-12-07 10:03:52.494365] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039900, cid 4, qid 0 00:29:24.048 [2024-12-07 10:03:52.494448] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.048 [2024-12-07 10:03:52.494454] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.048 [2024-12-07 10:03:52.494457] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.494460] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe3ad0): datao=0, datal=4096, cccid=4 00:29:24.048 [2024-12-07 10:03:52.494464] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1039900) on tqpair(0xfe3ad0): expected_datao=0, payload_size=4096 00:29:24.048 [2024-12-07 10:03:52.494468] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.494473] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.494476] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.535103] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.048 [2024-12-07 10:03:52.535114] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.048 [2024-12-07 10:03:52.535118] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.535122] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039900) on tqpair=0xfe3ad0 00:29:24.048 [2024-12-07 10:03:52.535129] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:24.048 [2024-12-07 10:03:52.535136] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:24.048 [2024-12-07 10:03:52.535144] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:24.048 [2024-12-07 10:03:52.535150] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:24.048 [2024-12-07 10:03:52.535155] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:24.048 [2024-12-07 10:03:52.535159] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:24.048 [2024-12-07 10:03:52.535164] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:24.048 [2024-12-07 10:03:52.535168] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:24.048 [2024-12-07 10:03:52.535173] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:24.048 [2024-12-07 10:03:52.535186] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.535190] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0xfe3ad0) 00:29:24.048 [2024-12-07 10:03:52.535197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.048 [2024-12-07 10:03:52.535203] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.535206] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.048 [2024-12-07 10:03:52.535209] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfe3ad0) 00:29:24.048 [2024-12-07 10:03:52.535214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.048 [2024-12-07 10:03:52.535227] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039900, cid 4, qid 0 00:29:24.048 [2024-12-07 10:03:52.535231] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039a80, cid 5, qid 0 00:29:24.049 [2024-12-07 10:03:52.535314] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.049 [2024-12-07 10:03:52.535320] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.049 [2024-12-07 10:03:52.535323] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535326] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039900) on tqpair=0xfe3ad0 00:29:24.049 [2024-12-07 10:03:52.535332] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.049 [2024-12-07 10:03:52.535337] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.049 [2024-12-07 10:03:52.535340] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535343] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039a80) on tqpair=0xfe3ad0 00:29:24.049 [2024-12-07 10:03:52.535351] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535355] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfe3ad0) 00:29:24.049 [2024-12-07 10:03:52.535363] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.049 [2024-12-07 10:03:52.535373] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039a80, cid 5, qid 0 00:29:24.049 [2024-12-07 10:03:52.535442] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.049 [2024-12-07 10:03:52.535448] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.049 [2024-12-07 10:03:52.535451] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535454] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039a80) on tqpair=0xfe3ad0 00:29:24.049 [2024-12-07 10:03:52.535462] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535465] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfe3ad0) 00:29:24.049 [2024-12-07 10:03:52.535471] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.049 [2024-12-07 10:03:52.535480] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039a80, cid 5, qid 0 00:29:24.049 [2024-12-07 10:03:52.535552] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.049 [2024-12-07 10:03:52.535558] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.049 [2024-12-07 10:03:52.535561] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535564] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039a80) on 
tqpair=0xfe3ad0 00:29:24.049 [2024-12-07 10:03:52.535572] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535575] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfe3ad0) 00:29:24.049 [2024-12-07 10:03:52.535581] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.049 [2024-12-07 10:03:52.535590] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039a80, cid 5, qid 0 00:29:24.049 [2024-12-07 10:03:52.535661] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.049 [2024-12-07 10:03:52.535666] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.049 [2024-12-07 10:03:52.535669] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535673] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039a80) on tqpair=0xfe3ad0 00:29:24.049 [2024-12-07 10:03:52.535686] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535690] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfe3ad0) 00:29:24.049 [2024-12-07 10:03:52.535696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.049 [2024-12-07 10:03:52.535701] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535705] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe3ad0) 00:29:24.049 [2024-12-07 10:03:52.535710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.049 [2024-12-07 
10:03:52.535716] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535719] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xfe3ad0) 00:29:24.049 [2024-12-07 10:03:52.535724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.049 [2024-12-07 10:03:52.535732] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535736] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xfe3ad0) 00:29:24.049 [2024-12-07 10:03:52.535741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.049 [2024-12-07 10:03:52.535753] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039a80, cid 5, qid 0 00:29:24.049 [2024-12-07 10:03:52.535758] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039900, cid 4, qid 0 00:29:24.049 [2024-12-07 10:03:52.535762] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039c00, cid 6, qid 0 00:29:24.049 [2024-12-07 10:03:52.535766] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039d80, cid 7, qid 0 00:29:24.049 [2024-12-07 10:03:52.535926] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.049 [2024-12-07 10:03:52.535933] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.049 [2024-12-07 10:03:52.535936] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535939] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe3ad0): datao=0, datal=8192, cccid=5 00:29:24.049 [2024-12-07 10:03:52.535943] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1039a80) on tqpair(0xfe3ad0): expected_datao=0, payload_size=8192 00:29:24.049 [2024-12-07 10:03:52.535953] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535959] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535963] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535968] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.049 [2024-12-07 10:03:52.535973] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.049 [2024-12-07 10:03:52.535976] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535979] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe3ad0): datao=0, datal=512, cccid=4 00:29:24.049 [2024-12-07 10:03:52.535983] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1039900) on tqpair(0xfe3ad0): expected_datao=0, payload_size=512 00:29:24.049 [2024-12-07 10:03:52.535987] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535992] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.535995] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.536000] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.049 [2024-12-07 10:03:52.536005] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.049 [2024-12-07 10:03:52.536008] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.049 [2024-12-07 10:03:52.536011] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe3ad0): datao=0, datal=512, cccid=6 00:29:24.049 [2024-12-07 10:03:52.536015] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1039c00) on tqpair(0xfe3ad0): expected_datao=0, 
payload_size=512 00:29:24.050 [2024-12-07 10:03:52.536019] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.050 [2024-12-07 10:03:52.536024] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.050 [2024-12-07 10:03:52.536028] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.050 [2024-12-07 10:03:52.536032] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.050 [2024-12-07 10:03:52.536037] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.050 [2024-12-07 10:03:52.536040] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.050 [2024-12-07 10:03:52.536043] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe3ad0): datao=0, datal=4096, cccid=7 00:29:24.050 [2024-12-07 10:03:52.536047] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1039d80) on tqpair(0xfe3ad0): expected_datao=0, payload_size=4096 00:29:24.050 [2024-12-07 10:03:52.536051] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.050 [2024-12-07 10:03:52.536057] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.050 [2024-12-07 10:03:52.536060] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.050 [2024-12-07 10:03:52.536067] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.050 [2024-12-07 10:03:52.536074] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.050 [2024-12-07 10:03:52.536077] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.050 [2024-12-07 10:03:52.536081] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039a80) on tqpair=0xfe3ad0 00:29:24.050 [2024-12-07 10:03:52.536090] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.050 [2024-12-07 10:03:52.536096] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.050 [2024-12-07 
10:03:52.536099] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.050 [2024-12-07 10:03:52.536102] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039900) on tqpair=0xfe3ad0 00:29:24.050 [2024-12-07 10:03:52.536111] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.050 [2024-12-07 10:03:52.536116] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.050 [2024-12-07 10:03:52.536119] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.050 [2024-12-07 10:03:52.536122] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039c00) on tqpair=0xfe3ad0 00:29:24.050 [2024-12-07 10:03:52.536128] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.050 [2024-12-07 10:03:52.536133] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.050 [2024-12-07 10:03:52.536136] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.050 [2024-12-07 10:03:52.536139] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039d80) on tqpair=0xfe3ad0 00:29:24.050 ===================================================== 00:29:24.050 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:24.050 ===================================================== 00:29:24.050 Controller Capabilities/Features 00:29:24.050 ================================ 00:29:24.050 Vendor ID: 8086 00:29:24.050 Subsystem Vendor ID: 8086 00:29:24.050 Serial Number: SPDK00000000000001 00:29:24.050 Model Number: SPDK bdev Controller 00:29:24.050 Firmware Version: 24.09.1 00:29:24.050 Recommended Arb Burst: 6 00:29:24.050 IEEE OUI Identifier: e4 d2 5c 00:29:24.050 Multi-path I/O 00:29:24.050 May have multiple subsystem ports: Yes 00:29:24.050 May have multiple controllers: Yes 00:29:24.050 Associated with SR-IOV VF: No 00:29:24.050 Max Data Transfer Size: 131072 00:29:24.050 Max Number of Namespaces: 32 
00:29:24.050 Max Number of I/O Queues: 127 00:29:24.050 NVMe Specification Version (VS): 1.3 00:29:24.050 NVMe Specification Version (Identify): 1.3 00:29:24.050 Maximum Queue Entries: 128 00:29:24.050 Contiguous Queues Required: Yes 00:29:24.050 Arbitration Mechanisms Supported 00:29:24.050 Weighted Round Robin: Not Supported 00:29:24.050 Vendor Specific: Not Supported 00:29:24.050 Reset Timeout: 15000 ms 00:29:24.050 Doorbell Stride: 4 bytes 00:29:24.050 NVM Subsystem Reset: Not Supported 00:29:24.050 Command Sets Supported 00:29:24.050 NVM Command Set: Supported 00:29:24.050 Boot Partition: Not Supported 00:29:24.050 Memory Page Size Minimum: 4096 bytes 00:29:24.050 Memory Page Size Maximum: 4096 bytes 00:29:24.050 Persistent Memory Region: Not Supported 00:29:24.050 Optional Asynchronous Events Supported 00:29:24.050 Namespace Attribute Notices: Supported 00:29:24.050 Firmware Activation Notices: Not Supported 00:29:24.050 ANA Change Notices: Not Supported 00:29:24.050 PLE Aggregate Log Change Notices: Not Supported 00:29:24.050 LBA Status Info Alert Notices: Not Supported 00:29:24.050 EGE Aggregate Log Change Notices: Not Supported 00:29:24.050 Normal NVM Subsystem Shutdown event: Not Supported 00:29:24.050 Zone Descriptor Change Notices: Not Supported 00:29:24.050 Discovery Log Change Notices: Not Supported 00:29:24.050 Controller Attributes 00:29:24.050 128-bit Host Identifier: Supported 00:29:24.050 Non-Operational Permissive Mode: Not Supported 00:29:24.050 NVM Sets: Not Supported 00:29:24.050 Read Recovery Levels: Not Supported 00:29:24.050 Endurance Groups: Not Supported 00:29:24.050 Predictable Latency Mode: Not Supported 00:29:24.050 Traffic Based Keep ALive: Not Supported 00:29:24.050 Namespace Granularity: Not Supported 00:29:24.050 SQ Associations: Not Supported 00:29:24.050 UUID List: Not Supported 00:29:24.050 Multi-Domain Subsystem: Not Supported 00:29:24.050 Fixed Capacity Management: Not Supported 00:29:24.050 Variable Capacity Management: Not 
Supported 00:29:24.050 Delete Endurance Group: Not Supported 00:29:24.050 Delete NVM Set: Not Supported 00:29:24.050 Extended LBA Formats Supported: Not Supported 00:29:24.050 Flexible Data Placement Supported: Not Supported 00:29:24.050 00:29:24.050 Controller Memory Buffer Support 00:29:24.050 ================================ 00:29:24.050 Supported: No 00:29:24.050 00:29:24.050 Persistent Memory Region Support 00:29:24.050 ================================ 00:29:24.050 Supported: No 00:29:24.050 00:29:24.050 Admin Command Set Attributes 00:29:24.050 ============================ 00:29:24.050 Security Send/Receive: Not Supported 00:29:24.050 Format NVM: Not Supported 00:29:24.050 Firmware Activate/Download: Not Supported 00:29:24.051 Namespace Management: Not Supported 00:29:24.051 Device Self-Test: Not Supported 00:29:24.051 Directives: Not Supported 00:29:24.051 NVMe-MI: Not Supported 00:29:24.051 Virtualization Management: Not Supported 00:29:24.051 Doorbell Buffer Config: Not Supported 00:29:24.051 Get LBA Status Capability: Not Supported 00:29:24.051 Command & Feature Lockdown Capability: Not Supported 00:29:24.051 Abort Command Limit: 4 00:29:24.051 Async Event Request Limit: 4 00:29:24.051 Number of Firmware Slots: N/A 00:29:24.051 Firmware Slot 1 Read-Only: N/A 00:29:24.051 Firmware Activation Without Reset: N/A 00:29:24.051 Multiple Update Detection Support: N/A 00:29:24.051 Firmware Update Granularity: No Information Provided 00:29:24.051 Per-Namespace SMART Log: No 00:29:24.051 Asymmetric Namespace Access Log Page: Not Supported 00:29:24.051 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:24.051 Command Effects Log Page: Supported 00:29:24.051 Get Log Page Extended Data: Supported 00:29:24.051 Telemetry Log Pages: Not Supported 00:29:24.051 Persistent Event Log Pages: Not Supported 00:29:24.051 Supported Log Pages Log Page: May Support 00:29:24.051 Commands Supported & Effects Log Page: Not Supported 00:29:24.051 Feature Identifiers & Effects Log Page:May 
Support 00:29:24.051 NVMe-MI Commands & Effects Log Page: May Support 00:29:24.051 Data Area 4 for Telemetry Log: Not Supported 00:29:24.051 Error Log Page Entries Supported: 128 00:29:24.051 Keep Alive: Supported 00:29:24.051 Keep Alive Granularity: 10000 ms 00:29:24.051 00:29:24.051 NVM Command Set Attributes 00:29:24.051 ========================== 00:29:24.051 Submission Queue Entry Size 00:29:24.051 Max: 64 00:29:24.051 Min: 64 00:29:24.051 Completion Queue Entry Size 00:29:24.051 Max: 16 00:29:24.051 Min: 16 00:29:24.051 Number of Namespaces: 32 00:29:24.051 Compare Command: Supported 00:29:24.051 Write Uncorrectable Command: Not Supported 00:29:24.051 Dataset Management Command: Supported 00:29:24.051 Write Zeroes Command: Supported 00:29:24.051 Set Features Save Field: Not Supported 00:29:24.051 Reservations: Supported 00:29:24.051 Timestamp: Not Supported 00:29:24.051 Copy: Supported 00:29:24.051 Volatile Write Cache: Present 00:29:24.051 Atomic Write Unit (Normal): 1 00:29:24.051 Atomic Write Unit (PFail): 1 00:29:24.051 Atomic Compare & Write Unit: 1 00:29:24.051 Fused Compare & Write: Supported 00:29:24.051 Scatter-Gather List 00:29:24.051 SGL Command Set: Supported 00:29:24.051 SGL Keyed: Supported 00:29:24.051 SGL Bit Bucket Descriptor: Not Supported 00:29:24.051 SGL Metadata Pointer: Not Supported 00:29:24.051 Oversized SGL: Not Supported 00:29:24.051 SGL Metadata Address: Not Supported 00:29:24.051 SGL Offset: Supported 00:29:24.051 Transport SGL Data Block: Not Supported 00:29:24.051 Replay Protected Memory Block: Not Supported 00:29:24.051 00:29:24.051 Firmware Slot Information 00:29:24.051 ========================= 00:29:24.051 Active slot: 1 00:29:24.051 Slot 1 Firmware Revision: 24.09.1 00:29:24.051 00:29:24.051 00:29:24.051 Commands Supported and Effects 00:29:24.051 ============================== 00:29:24.051 Admin Commands 00:29:24.051 -------------- 00:29:24.051 Get Log Page (02h): Supported 00:29:24.051 Identify (06h): Supported 
00:29:24.051 Abort (08h): Supported 00:29:24.051 Set Features (09h): Supported 00:29:24.051 Get Features (0Ah): Supported 00:29:24.051 Asynchronous Event Request (0Ch): Supported 00:29:24.051 Keep Alive (18h): Supported 00:29:24.051 I/O Commands 00:29:24.051 ------------ 00:29:24.051 Flush (00h): Supported LBA-Change 00:29:24.051 Write (01h): Supported LBA-Change 00:29:24.051 Read (02h): Supported 00:29:24.051 Compare (05h): Supported 00:29:24.051 Write Zeroes (08h): Supported LBA-Change 00:29:24.051 Dataset Management (09h): Supported LBA-Change 00:29:24.051 Copy (19h): Supported LBA-Change 00:29:24.051 00:29:24.051 Error Log 00:29:24.051 ========= 00:29:24.051 00:29:24.051 Arbitration 00:29:24.051 =========== 00:29:24.051 Arbitration Burst: 1 00:29:24.051 00:29:24.051 Power Management 00:29:24.051 ================ 00:29:24.051 Number of Power States: 1 00:29:24.051 Current Power State: Power State #0 00:29:24.051 Power State #0: 00:29:24.051 Max Power: 0.00 W 00:29:24.051 Non-Operational State: Operational 00:29:24.051 Entry Latency: Not Reported 00:29:24.051 Exit Latency: Not Reported 00:29:24.051 Relative Read Throughput: 0 00:29:24.051 Relative Read Latency: 0 00:29:24.051 Relative Write Throughput: 0 00:29:24.051 Relative Write Latency: 0 00:29:24.051 Idle Power: Not Reported 00:29:24.051 Active Power: Not Reported 00:29:24.051 Non-Operational Permissive Mode: Not Supported 00:29:24.051 00:29:24.051 Health Information 00:29:24.051 ================== 00:29:24.051 Critical Warnings: 00:29:24.051 Available Spare Space: OK 00:29:24.051 Temperature: OK 00:29:24.051 Device Reliability: OK 00:29:24.051 Read Only: No 00:29:24.051 Volatile Memory Backup: OK 00:29:24.051 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:24.051 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:24.051 Available Spare: 0% 00:29:24.051 Available Spare Threshold: 0% 00:29:24.051 Life Percentage U[2024-12-07 10:03:52.536224] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:29:24.051 [2024-12-07 10:03:52.536229] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xfe3ad0) 00:29:24.051 [2024-12-07 10:03:52.536234] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.051 [2024-12-07 10:03:52.536246] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039d80, cid 7, qid 0 00:29:24.051 [2024-12-07 10:03:52.536327] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.051 [2024-12-07 10:03:52.536333] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.051 [2024-12-07 10:03:52.536336] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.051 [2024-12-07 10:03:52.536339] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039d80) on tqpair=0xfe3ad0 00:29:24.051 [2024-12-07 10:03:52.536365] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:24.051 [2024-12-07 10:03:52.536375] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039300) on tqpair=0xfe3ad0 00:29:24.051 [2024-12-07 10:03:52.536380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.052 [2024-12-07 10:03:52.536385] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039480) on tqpair=0xfe3ad0 00:29:24.052 [2024-12-07 10:03:52.536389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.052 [2024-12-07 10:03:52.536393] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039600) on tqpair=0xfe3ad0 00:29:24.052 [2024-12-07 10:03:52.536397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:24.052 [2024-12-07 10:03:52.536402] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.052 [2024-12-07 10:03:52.536405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.052 [2024-12-07 10:03:52.536412] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.536416] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.536419] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.052 [2024-12-07 10:03:52.536424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.052 [2024-12-07 10:03:52.536437] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.052 [2024-12-07 10:03:52.539954] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.052 [2024-12-07 10:03:52.539961] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.052 [2024-12-07 10:03:52.539964] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.539967] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.052 [2024-12-07 10:03:52.539973] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.539977] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.539980] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.052 [2024-12-07 10:03:52.539986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.052 [2024-12-07 10:03:52.540000] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.052 [2024-12-07 10:03:52.540154] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.052 [2024-12-07 10:03:52.540160] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.052 [2024-12-07 10:03:52.540163] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540166] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.052 [2024-12-07 10:03:52.540170] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:24.052 [2024-12-07 10:03:52.540174] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:24.052 [2024-12-07 10:03:52.540182] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540186] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540189] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.052 [2024-12-07 10:03:52.540194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.052 [2024-12-07 10:03:52.540204] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.052 [2024-12-07 10:03:52.540270] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.052 [2024-12-07 10:03:52.540276] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.052 [2024-12-07 10:03:52.540279] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540282] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.052 [2024-12-07 10:03:52.540290] 
nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540294] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540297] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.052 [2024-12-07 10:03:52.540303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.052 [2024-12-07 10:03:52.540312] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.052 [2024-12-07 10:03:52.540386] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.052 [2024-12-07 10:03:52.540392] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.052 [2024-12-07 10:03:52.540395] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540398] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.052 [2024-12-07 10:03:52.540406] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540409] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540415] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.052 [2024-12-07 10:03:52.540421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.052 [2024-12-07 10:03:52.540430] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.052 [2024-12-07 10:03:52.540493] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.052 [2024-12-07 10:03:52.540499] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.052 [2024-12-07 10:03:52.540502] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540505] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.052 [2024-12-07 10:03:52.540514] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540517] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540521] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.052 [2024-12-07 10:03:52.540526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.052 [2024-12-07 10:03:52.540536] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.052 [2024-12-07 10:03:52.540603] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.052 [2024-12-07 10:03:52.540609] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.052 [2024-12-07 10:03:52.540612] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540615] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.052 [2024-12-07 10:03:52.540623] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540627] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540630] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.052 [2024-12-07 10:03:52.540635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.052 [2024-12-07 10:03:52.540644] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.052 [2024-12-07 
10:03:52.540720] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.052 [2024-12-07 10:03:52.540726] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.052 [2024-12-07 10:03:52.540729] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540732] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.052 [2024-12-07 10:03:52.540741] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540744] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540747] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.052 [2024-12-07 10:03:52.540753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.052 [2024-12-07 10:03:52.540762] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.052 [2024-12-07 10:03:52.540836] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.052 [2024-12-07 10:03:52.540842] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.052 [2024-12-07 10:03:52.540845] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.052 [2024-12-07 10:03:52.540849] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.052 [2024-12-07 10:03:52.540857] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.540860] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.540863] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.053 [2024-12-07 10:03:52.540870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.053 [2024-12-07 10:03:52.540880] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.053 [2024-12-07 10:03:52.540952] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.053 [2024-12-07 10:03:52.540958] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.053 [2024-12-07 10:03:52.540961] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.540965] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.053 [2024-12-07 10:03:52.540973] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.540977] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.540980] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.053 [2024-12-07 10:03:52.540985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.053 [2024-12-07 10:03:52.540995] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.053 [2024-12-07 10:03:52.541070] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.053 [2024-12-07 10:03:52.541076] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.053 [2024-12-07 10:03:52.541079] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541082] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.053 [2024-12-07 10:03:52.541090] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541094] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.053 
[2024-12-07 10:03:52.541097] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.053 [2024-12-07 10:03:52.541103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.053 [2024-12-07 10:03:52.541112] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.053 [2024-12-07 10:03:52.541187] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.053 [2024-12-07 10:03:52.541193] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.053 [2024-12-07 10:03:52.541196] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541199] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.053 [2024-12-07 10:03:52.541207] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541211] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541214] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.053 [2024-12-07 10:03:52.541219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.053 [2024-12-07 10:03:52.541229] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.053 [2024-12-07 10:03:52.541305] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.053 [2024-12-07 10:03:52.541310] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.053 [2024-12-07 10:03:52.541313] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541317] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on 
tqpair=0xfe3ad0 00:29:24.053 [2024-12-07 10:03:52.541325] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541328] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541331] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.053 [2024-12-07 10:03:52.541337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.053 [2024-12-07 10:03:52.541348] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.053 [2024-12-07 10:03:52.541413] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.053 [2024-12-07 10:03:52.541419] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.053 [2024-12-07 10:03:52.541422] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541426] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.053 [2024-12-07 10:03:52.541434] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541437] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541440] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.053 [2024-12-07 10:03:52.541446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.053 [2024-12-07 10:03:52.541455] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.053 [2024-12-07 10:03:52.541519] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.053 [2024-12-07 10:03:52.541525] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:24.053 [2024-12-07 10:03:52.541528] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541531] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.053 [2024-12-07 10:03:52.541539] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541543] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541546] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.053 [2024-12-07 10:03:52.541552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.053 [2024-12-07 10:03:52.541561] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.053 [2024-12-07 10:03:52.541637] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.053 [2024-12-07 10:03:52.541643] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.053 [2024-12-07 10:03:52.541645] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541649] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.053 [2024-12-07 10:03:52.541657] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541661] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541664] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.053 [2024-12-07 10:03:52.541669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.053 [2024-12-07 10:03:52.541678] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, 
cid 3, qid 0 00:29:24.053 [2024-12-07 10:03:52.541754] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.053 [2024-12-07 10:03:52.541759] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.053 [2024-12-07 10:03:52.541763] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541766] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.053 [2024-12-07 10:03:52.541774] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541778] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.053 [2024-12-07 10:03:52.541781] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.053 [2024-12-07 10:03:52.541786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.053 [2024-12-07 10:03:52.541795] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.053 [2024-12-07 10:03:52.541862] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.053 [2024-12-07 10:03:52.541868] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.054 [2024-12-07 10:03:52.541871] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.541874] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.054 [2024-12-07 10:03:52.541882] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.541885] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.541889] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.054 [2024-12-07 10:03:52.541894] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.054 [2024-12-07 10:03:52.541904] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.054 [2024-12-07 10:03:52.541988] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.054 [2024-12-07 10:03:52.541994] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.054 [2024-12-07 10:03:52.541997] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542000] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.054 [2024-12-07 10:03:52.542008] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542012] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542015] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.054 [2024-12-07 10:03:52.542021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.054 [2024-12-07 10:03:52.542030] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.054 [2024-12-07 10:03:52.542105] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.054 [2024-12-07 10:03:52.542111] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.054 [2024-12-07 10:03:52.542114] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542118] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.054 [2024-12-07 10:03:52.542126] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542129] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542132] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.054 [2024-12-07 10:03:52.542138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.054 [2024-12-07 10:03:52.542147] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.054 [2024-12-07 10:03:52.542223] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.054 [2024-12-07 10:03:52.542228] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.054 [2024-12-07 10:03:52.542231] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542234] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.054 [2024-12-07 10:03:52.542242] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542246] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542249] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.054 [2024-12-07 10:03:52.542255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.054 [2024-12-07 10:03:52.542264] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.054 [2024-12-07 10:03:52.542330] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.054 [2024-12-07 10:03:52.542337] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.054 [2024-12-07 10:03:52.542341] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542344] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.054 [2024-12-07 10:03:52.542353] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542356] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542359] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.054 [2024-12-07 10:03:52.542365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.054 [2024-12-07 10:03:52.542374] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.054 [2024-12-07 10:03:52.542458] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.054 [2024-12-07 10:03:52.542463] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.054 [2024-12-07 10:03:52.542466] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542470] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.054 [2024-12-07 10:03:52.542478] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542481] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542484] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.054 [2024-12-07 10:03:52.542490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.054 [2024-12-07 10:03:52.542500] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.054 [2024-12-07 10:03:52.542592] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.054 [2024-12-07 
10:03:52.542597] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.054 [2024-12-07 10:03:52.542600] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542603] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.054 [2024-12-07 10:03:52.542613] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542616] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542619] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.054 [2024-12-07 10:03:52.542625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.054 [2024-12-07 10:03:52.542634] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.054 [2024-12-07 10:03:52.542700] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.054 [2024-12-07 10:03:52.542706] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.054 [2024-12-07 10:03:52.542709] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542712] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.054 [2024-12-07 10:03:52.542720] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542724] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542727] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.054 [2024-12-07 10:03:52.542733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.054 [2024-12-07 
10:03:52.542742] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.054 [2024-12-07 10:03:52.542814] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.054 [2024-12-07 10:03:52.542820] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.054 [2024-12-07 10:03:52.542825] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542828] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.054 [2024-12-07 10:03:52.542836] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542840] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.054 [2024-12-07 10:03:52.542843] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.054 [2024-12-07 10:03:52.542849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.054 [2024-12-07 10:03:52.542858] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.054 [2024-12-07 10:03:52.546954] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.054 [2024-12-07 10:03:52.546962] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.054 [2024-12-07 10:03:52.546965] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.055 [2024-12-07 10:03:52.546968] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.055 [2024-12-07 10:03:52.546979] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.055 [2024-12-07 10:03:52.546982] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.055 [2024-12-07 10:03:52.546986] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0xfe3ad0) 00:29:24.055 [2024-12-07 10:03:52.546992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.055 [2024-12-07 10:03:52.547002] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1039780, cid 3, qid 0 00:29:24.055 [2024-12-07 10:03:52.547135] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.055 [2024-12-07 10:03:52.547141] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.055 [2024-12-07 10:03:52.547144] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.055 [2024-12-07 10:03:52.547147] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1039780) on tqpair=0xfe3ad0 00:29:24.055 [2024-12-07 10:03:52.547153] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:29:24.055 sed: 0% 00:29:24.055 Data Units Read: 0 00:29:24.055 Data Units Written: 0 00:29:24.055 Host Read Commands: 0 00:29:24.055 Host Write Commands: 0 00:29:24.055 Controller Busy Time: 0 minutes 00:29:24.055 Power Cycles: 0 00:29:24.055 Power On Hours: 0 hours 00:29:24.055 Unsafe Shutdowns: 0 00:29:24.055 Unrecoverable Media Errors: 0 00:29:24.055 Lifetime Error Log Entries: 0 00:29:24.055 Warning Temperature Time: 0 minutes 00:29:24.055 Critical Temperature Time: 0 minutes 00:29:24.055 00:29:24.055 Number of Queues 00:29:24.055 ================ 00:29:24.055 Number of I/O Submission Queues: 127 00:29:24.055 Number of I/O Completion Queues: 127 00:29:24.055 00:29:24.055 Active Namespaces 00:29:24.055 ================= 00:29:24.055 Namespace ID:1 00:29:24.055 Error Recovery Timeout: Unlimited 00:29:24.055 Command Set Identifier: NVM (00h) 00:29:24.055 Deallocate: Supported 00:29:24.055 Deallocated/Unwritten Error: Not Supported 00:29:24.055 Deallocated Read Value: Unknown 00:29:24.055 Deallocate in Write 
Zeroes: Not Supported 00:29:24.055 Deallocated Guard Field: 0xFFFF 00:29:24.055 Flush: Supported 00:29:24.055 Reservation: Supported 00:29:24.055 Namespace Sharing Capabilities: Multiple Controllers 00:29:24.055 Size (in LBAs): 131072 (0GiB) 00:29:24.055 Capacity (in LBAs): 131072 (0GiB) 00:29:24.055 Utilization (in LBAs): 131072 (0GiB) 00:29:24.055 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:24.055 EUI64: ABCDEF0123456789 00:29:24.055 UUID: b1b3fbf7-7c81-4580-aceb-bd576566f4a1 00:29:24.055 Thin Provisioning: Not Supported 00:29:24.055 Per-NS Atomic Units: Yes 00:29:24.055 Atomic Boundary Size (Normal): 0 00:29:24.055 Atomic Boundary Size (PFail): 0 00:29:24.055 Atomic Boundary Offset: 0 00:29:24.055 Maximum Single Source Range Length: 65535 00:29:24.055 Maximum Copy Length: 65535 00:29:24.055 Maximum Source Range Count: 1 00:29:24.055 NGUID/EUI64 Never Reused: No 00:29:24.055 Namespace Write Protected: No 00:29:24.055 Number of LBA Formats: 1 00:29:24.055 Current LBA Format: LBA Format #00 00:29:24.055 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:24.055 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 
-- # sync 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:24.055 rmmod nvme_tcp 00:29:24.055 rmmod nvme_fabrics 00:29:24.055 rmmod nvme_keyring 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 1379753 ']' 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 1379753 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1379753 ']' 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1379753 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1379753 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1379753' 00:29:24.055 killing process with pid 1379753 00:29:24.055 10:03:52 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1379753 00:29:24.055 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1379753 00:29:24.315 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:24.315 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:24.315 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:24.315 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:24.315 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:29:24.315 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:29:24.315 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:24.315 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:24.315 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:24.315 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.315 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.315 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.849 10:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:26.849 00:29:26.849 real 0m8.982s 00:29:26.849 user 0m5.173s 00:29:26.849 sys 0m4.637s 00:29:26.849 10:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:26.849 10:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:26.849 ************************************ 00:29:26.849 END TEST nvmf_identify 00:29:26.849 
************************************ 00:29:26.849 10:03:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:26.849 10:03:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:26.849 10:03:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:26.849 10:03:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.849 ************************************ 00:29:26.849 START TEST nvmf_perf 00:29:26.849 ************************************ 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:26.849 * Looking for test storage... 00:29:26.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@337 -- # read -ra ver2 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:26.849 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:26.850 10:03:55 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:26.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.850 --rc genhtml_branch_coverage=1 00:29:26.850 --rc genhtml_function_coverage=1 00:29:26.850 --rc genhtml_legend=1 00:29:26.850 --rc geninfo_all_blocks=1 00:29:26.850 --rc geninfo_unexecuted_blocks=1 00:29:26.850 00:29:26.850 ' 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:26.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.850 --rc genhtml_branch_coverage=1 00:29:26.850 --rc genhtml_function_coverage=1 00:29:26.850 --rc genhtml_legend=1 00:29:26.850 --rc geninfo_all_blocks=1 00:29:26.850 --rc geninfo_unexecuted_blocks=1 00:29:26.850 00:29:26.850 ' 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:26.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.850 --rc genhtml_branch_coverage=1 00:29:26.850 --rc genhtml_function_coverage=1 00:29:26.850 --rc genhtml_legend=1 00:29:26.850 --rc geninfo_all_blocks=1 00:29:26.850 --rc geninfo_unexecuted_blocks=1 00:29:26.850 00:29:26.850 ' 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:26.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.850 --rc genhtml_branch_coverage=1 00:29:26.850 --rc genhtml_function_coverage=1 00:29:26.850 --rc genhtml_legend=1 00:29:26.850 --rc geninfo_all_blocks=1 00:29:26.850 --rc geninfo_unexecuted_blocks=1 00:29:26.850 00:29:26.850 ' 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:26.850 10:03:55 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:26.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:29:26.850 10:03:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:32.122 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:32.122 10:04:00 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:32.122 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:32.122 
10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:32.122 Found net devices under 0000:86:00.0: cvl_0_0 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:32.122 Found net devices under 0000:86:00.1: cvl_0_1 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # is_hw=yes 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # 
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:32.122 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:32.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:29:32.123 00:29:32.123 --- 10.0.0.2 ping statistics --- 00:29:32.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.123 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:32.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:32.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:29:32.123 00:29:32.123 --- 10.0.0.1 ping statistics --- 00:29:32.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.123 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # return 0 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=1383391 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 1383391 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:32.123 
10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1383391 ']' 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:32.123 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:32.123 [2024-12-07 10:04:00.747330] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:29:32.123 [2024-12-07 10:04:00.747388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.123 [2024-12-07 10:04:00.808556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:32.382 [2024-12-07 10:04:00.852262] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:32.382 [2024-12-07 10:04:00.852302] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:32.382 [2024-12-07 10:04:00.852310] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:32.382 [2024-12-07 10:04:00.852316] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:32.382 [2024-12-07 10:04:00.852322] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:32.382 [2024-12-07 10:04:00.852359] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.382 [2024-12-07 10:04:00.852454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:32.382 [2024-12-07 10:04:00.852479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:32.382 [2024-12-07 10:04:00.852480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.382 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:32.382 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:29:32.382 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:32.382 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:32.382 10:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:32.382 10:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:32.382 10:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:32.382 10:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:35.664 10:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:35.664 10:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:35.664 10:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:29:35.664 10:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:35.922 10:04:04 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:35.922 10:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:29:35.922 10:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:35.922 10:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:35.922 10:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:35.922 [2024-12-07 10:04:04.621404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.179 10:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:36.179 10:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:36.179 10:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:36.437 10:04:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:36.437 10:04:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:36.695 10:04:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:36.952 [2024-12-07 10:04:05.445825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.952 10:04:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:29:37.210 10:04:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:29:37.210 10:04:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:37.210 10:04:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:37.210 10:04:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:38.585 Initializing NVMe Controllers 00:29:38.585 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:29:38.585 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:29:38.585 Initialization complete. Launching workers. 00:29:38.585 ======================================================== 00:29:38.585 Latency(us) 00:29:38.585 Device Information : IOPS MiB/s Average min max 00:29:38.585 PCIE (0000:5e:00.0) NSID 1 from core 0: 96557.05 377.18 330.77 14.36 4377.74 00:29:38.585 ======================================================== 00:29:38.585 Total : 96557.05 377.18 330.77 14.36 4377.74 00:29:38.585 00:29:38.585 10:04:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:39.522 Initializing NVMe Controllers 00:29:39.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:39.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:39.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:39.522 Initialization complete. Launching workers. 
00:29:39.522 ======================================================== 00:29:39.522 Latency(us) 00:29:39.522 Device Information : IOPS MiB/s Average min max 00:29:39.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 63.00 0.25 16094.88 123.62 44673.32 00:29:39.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15208.33 7954.23 47884.60 00:29:39.522 ======================================================== 00:29:39.522 Total : 129.00 0.50 15641.30 123.62 47884.60 00:29:39.522 00:29:39.522 10:04:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:41.425 Initializing NVMe Controllers 00:29:41.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:41.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:41.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:41.426 Initialization complete. Launching workers. 
00:29:41.426 ======================================================== 00:29:41.426 Latency(us) 00:29:41.426 Device Information : IOPS MiB/s Average min max 00:29:41.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10743.86 41.97 2978.68 440.22 6237.88 00:29:41.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3874.95 15.14 8308.30 7163.02 15956.49 00:29:41.426 ======================================================== 00:29:41.426 Total : 14618.81 57.10 4391.38 440.22 15956.49 00:29:41.426 00:29:41.426 10:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:41.426 10:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:41.426 10:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:43.962 Initializing NVMe Controllers 00:29:43.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:43.962 Controller IO queue size 128, less than required. 00:29:43.962 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.962 Controller IO queue size 128, less than required. 00:29:43.962 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:43.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:43.962 Initialization complete. Launching workers. 
00:29:43.962 ======================================================== 00:29:43.962 Latency(us) 00:29:43.962 Device Information : IOPS MiB/s Average min max 00:29:43.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1620.62 405.16 80271.67 55659.61 152868.49 00:29:43.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 620.21 155.05 217996.12 64765.93 333757.61 00:29:43.962 ======================================================== 00:29:43.962 Total : 2240.83 560.21 118390.46 55659.61 333757.61 00:29:43.962 00:29:43.962 10:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:43.962 No valid NVMe controllers or AIO or URING devices found 00:29:43.962 Initializing NVMe Controllers 00:29:43.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:43.962 Controller IO queue size 128, less than required. 00:29:43.962 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.962 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:43.962 Controller IO queue size 128, less than required. 00:29:43.962 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.962 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:29:43.962 WARNING: Some requested NVMe devices were skipped 00:29:43.962 10:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:46.499 Initializing NVMe Controllers 00:29:46.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:46.499 Controller IO queue size 128, less than required. 00:29:46.499 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.499 Controller IO queue size 128, less than required. 00:29:46.499 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:46.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:46.499 Initialization complete. Launching workers. 
00:29:46.499 00:29:46.499 ==================== 00:29:46.499 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:46.499 TCP transport: 00:29:46.499 polls: 12190 00:29:46.499 idle_polls: 8110 00:29:46.499 sock_completions: 4080 00:29:46.499 nvme_completions: 6095 00:29:46.499 submitted_requests: 9124 00:29:46.499 queued_requests: 1 00:29:46.499 00:29:46.499 ==================== 00:29:46.499 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:46.499 TCP transport: 00:29:46.499 polls: 13289 00:29:46.499 idle_polls: 8020 00:29:46.499 sock_completions: 5269 00:29:46.499 nvme_completions: 6449 00:29:46.499 submitted_requests: 9680 00:29:46.499 queued_requests: 1 00:29:46.499 ======================================================== 00:29:46.499 Latency(us) 00:29:46.499 Device Information : IOPS MiB/s Average min max 00:29:46.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1520.94 380.23 85673.77 48490.86 133132.80 00:29:46.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1609.29 402.32 80827.59 42573.10 126564.45 00:29:46.499 ======================================================== 00:29:46.499 Total : 3130.23 782.56 83182.29 42573.10 133132.80 00:29:46.499 00:29:46.499 10:04:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:46.499 10:04:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:46.499 10:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:46.499 10:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:29:46.499 10:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:49.784 10:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=ea6473a2-97ec-47a7-8021-ec0759486f58 00:29:49.785 10:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb ea6473a2-97ec-47a7-8021-ec0759486f58 00:29:49.785 10:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=ea6473a2-97ec-47a7-8021-ec0759486f58 00:29:49.785 10:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:49.785 10:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:49.785 10:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:49.785 10:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:50.043 10:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:50.043 { 00:29:50.043 "uuid": "ea6473a2-97ec-47a7-8021-ec0759486f58", 00:29:50.043 "name": "lvs_0", 00:29:50.043 "base_bdev": "Nvme0n1", 00:29:50.043 "total_data_clusters": 238234, 00:29:50.043 "free_clusters": 238234, 00:29:50.043 "block_size": 512, 00:29:50.043 "cluster_size": 4194304 00:29:50.043 } 00:29:50.043 ]' 00:29:50.043 10:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="ea6473a2-97ec-47a7-8021-ec0759486f58") .free_clusters' 00:29:50.043 10:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:29:50.043 10:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="ea6473a2-97ec-47a7-8021-ec0759486f58") .cluster_size' 00:29:50.043 10:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:50.043 10:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:29:50.043 10:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 
00:29:50.043 952936 00:29:50.043 10:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:50.043 10:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:50.043 10:04:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ea6473a2-97ec-47a7-8021-ec0759486f58 lbd_0 20480 00:29:50.611 10:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=71824f8a-34fd-471b-97a8-a5086eb250fc 00:29:50.611 10:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 71824f8a-34fd-471b-97a8-a5086eb250fc lvs_n_0 00:29:51.178 10:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=470e28f3-d162-4a3f-8a49-33560befd72a 00:29:51.178 10:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 470e28f3-d162-4a3f-8a49-33560befd72a 00:29:51.178 10:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=470e28f3-d162-4a3f-8a49-33560befd72a 00:29:51.178 10:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:51.178 10:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:51.178 10:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:51.178 10:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:51.436 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:51.436 { 00:29:51.436 "uuid": "ea6473a2-97ec-47a7-8021-ec0759486f58", 00:29:51.436 "name": "lvs_0", 00:29:51.436 "base_bdev": "Nvme0n1", 00:29:51.436 "total_data_clusters": 238234, 00:29:51.436 "free_clusters": 233114, 00:29:51.436 "block_size": 512, 00:29:51.436 
"cluster_size": 4194304 00:29:51.436 }, 00:29:51.436 { 00:29:51.436 "uuid": "470e28f3-d162-4a3f-8a49-33560befd72a", 00:29:51.436 "name": "lvs_n_0", 00:29:51.436 "base_bdev": "71824f8a-34fd-471b-97a8-a5086eb250fc", 00:29:51.436 "total_data_clusters": 5114, 00:29:51.436 "free_clusters": 5114, 00:29:51.436 "block_size": 512, 00:29:51.436 "cluster_size": 4194304 00:29:51.436 } 00:29:51.436 ]' 00:29:51.436 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="470e28f3-d162-4a3f-8a49-33560befd72a") .free_clusters' 00:29:51.436 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:29:51.436 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="470e28f3-d162-4a3f-8a49-33560befd72a") .cluster_size' 00:29:51.436 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:51.436 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:29:51.436 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:29:51.436 20456 00:29:51.436 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:51.436 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 470e28f3-d162-4a3f-8a49-33560befd72a lbd_nest_0 20456 00:29:51.695 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=161e1d69-9f0c-49fd-a4f4-f8ed9dd099d8 00:29:51.695 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:51.953 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:51.953 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 161e1d69-9f0c-49fd-a4f4-f8ed9dd099d8 00:29:52.212 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:52.471 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:52.471 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:52.471 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:52.471 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:52.471 10:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:04.687 Initializing NVMe Controllers 00:30:04.687 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:04.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:04.687 Initialization complete. Launching workers. 
00:30:04.687 ======================================================== 00:30:04.687 Latency(us) 00:30:04.687 Device Information : IOPS MiB/s Average min max 00:30:04.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.50 0.02 21117.38 141.42 45667.86 00:30:04.687 ======================================================== 00:30:04.687 Total : 47.50 0.02 21117.38 141.42 45667.86 00:30:04.687 00:30:04.687 10:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:04.687 10:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:14.661 Initializing NVMe Controllers 00:30:14.661 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:14.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:14.661 Initialization complete. Launching workers. 
00:30:14.661 ======================================================== 00:30:14.661 Latency(us) 00:30:14.661 Device Information : IOPS MiB/s Average min max 00:30:14.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 76.50 9.56 13079.53 7751.38 51872.50 00:30:14.661 ======================================================== 00:30:14.661 Total : 76.50 9.56 13079.53 7751.38 51872.50 00:30:14.661 00:30:14.661 10:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:14.661 10:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:14.661 10:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:24.646 Initializing NVMe Controllers 00:30:24.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:24.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:24.646 Initialization complete. Launching workers. 
00:30:24.646 ======================================================== 00:30:24.646 Latency(us) 00:30:24.646 Device Information : IOPS MiB/s Average min max 00:30:24.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8488.12 4.14 3770.35 231.44 10073.93 00:30:24.646 ======================================================== 00:30:24.646 Total : 8488.12 4.14 3770.35 231.44 10073.93 00:30:24.646 00:30:24.646 10:04:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:24.646 10:04:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:34.624 Initializing NVMe Controllers 00:30:34.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:34.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:34.624 Initialization complete. Launching workers. 
00:30:34.624 ======================================================== 00:30:34.624 Latency(us) 00:30:34.624 Device Information : IOPS MiB/s Average min max 00:30:34.624 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3478.69 434.84 9200.55 891.08 22954.82 00:30:34.624 ======================================================== 00:30:34.624 Total : 3478.69 434.84 9200.55 891.08 22954.82 00:30:34.624 00:30:34.624 10:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:34.624 10:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:34.624 10:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:44.600 Initializing NVMe Controllers 00:30:44.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:44.601 Controller IO queue size 128, less than required. 00:30:44.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:44.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:44.601 Initialization complete. Launching workers. 
00:30:44.601 ======================================================== 00:30:44.601 Latency(us) 00:30:44.601 Device Information : IOPS MiB/s Average min max 00:30:44.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15635.53 7.63 8186.42 1477.55 48553.22 00:30:44.601 ======================================================== 00:30:44.601 Total : 15635.53 7.63 8186.42 1477.55 48553.22 00:30:44.601 00:30:44.601 10:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:44.601 10:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:54.574 Initializing NVMe Controllers 00:30:54.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:54.574 Controller IO queue size 128, less than required. 00:30:54.574 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:54.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:54.574 Initialization complete. Launching workers. 
00:30:54.574 ======================================================== 00:30:54.574 Latency(us) 00:30:54.574 Device Information : IOPS MiB/s Average min max 00:30:54.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1209.70 151.21 106167.29 15406.95 215271.34 00:30:54.574 ======================================================== 00:30:54.574 Total : 1209.70 151.21 106167.29 15406.95 215271.34 00:30:54.574 00:30:54.574 10:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:54.833 10:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 161e1d69-9f0c-49fd-a4f4-f8ed9dd099d8 00:30:55.401 10:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:55.660 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 71824f8a-34fd-471b-97a8-a5086eb250fc 00:30:55.919 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:56.178 rmmod nvme_tcp 00:30:56.178 rmmod nvme_fabrics 00:30:56.178 rmmod nvme_keyring 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 1383391 ']' 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 1383391 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1383391 ']' 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1383391 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1383391 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1383391' 00:30:56.178 killing process with pid 1383391 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1383391 00:30:56.178 10:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1383391 00:30:57.556 10:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:57.556 10:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # 
[[ tcp == \t\c\p ]] 00:30:57.556 10:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:57.556 10:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:30:57.556 10:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:57.556 10:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:30:57.556 10:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:30:57.815 10:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:57.815 10:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:57.815 10:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.815 10:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.815 10:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.721 10:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:59.721 00:30:59.721 real 1m33.318s 00:30:59.721 user 5m33.928s 00:30:59.721 sys 0m17.011s 00:30:59.721 10:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:59.721 10:05:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:59.721 ************************************ 00:30:59.721 END TEST nvmf_perf 00:30:59.721 ************************************ 00:30:59.721 10:05:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:59.721 10:05:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:59.721 10:05:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:59.721 10:05:28 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:59.721 ************************************ 00:30:59.721 START TEST nvmf_fio_host 00:30:59.721 ************************************ 00:30:59.721 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:59.981 * Looking for test storage... 00:30:59.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- 
# export 'LCOV_OPTS= 00:30:59.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.981 --rc genhtml_branch_coverage=1 00:30:59.981 --rc genhtml_function_coverage=1 00:30:59.981 --rc genhtml_legend=1 00:30:59.981 --rc geninfo_all_blocks=1 00:30:59.981 --rc geninfo_unexecuted_blocks=1 00:30:59.981 00:30:59.981 ' 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:59.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.981 --rc genhtml_branch_coverage=1 00:30:59.981 --rc genhtml_function_coverage=1 00:30:59.981 --rc genhtml_legend=1 00:30:59.981 --rc geninfo_all_blocks=1 00:30:59.981 --rc geninfo_unexecuted_blocks=1 00:30:59.981 00:30:59.981 ' 00:30:59.981 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:59.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.981 --rc genhtml_branch_coverage=1 00:30:59.981 --rc genhtml_function_coverage=1 00:30:59.981 --rc genhtml_legend=1 00:30:59.981 --rc geninfo_all_blocks=1 00:30:59.982 --rc geninfo_unexecuted_blocks=1 00:30:59.982 00:30:59.982 ' 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:59.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.982 --rc genhtml_branch_coverage=1 00:30:59.982 --rc genhtml_function_coverage=1 00:30:59.982 --rc genhtml_legend=1 00:30:59.982 --rc geninfo_all_blocks=1 00:30:59.982 --rc geninfo_unexecuted_blocks=1 00:30:59.982 00:30:59.982 ' 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.982 10:05:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:59.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:59.982 10:05:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:30:59.982 10:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:05.261 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:05.261 10:05:33 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:05.261 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:05.262 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:05.262 10:05:33 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:05.262 Found net devices under 0000:86:00.0: cvl_0_0 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:05.262 Found net devices under 0000:86:00.1: cvl_0_1 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # is_hw=yes 
00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:05.262 10:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:05.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:05.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:31:05.521 00:31:05.521 --- 10.0.0.2 ping statistics --- 00:31:05.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.521 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:05.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:05.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:31:05.521 00:31:05.521 --- 10.0.0.1 ping statistics --- 00:31:05.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.521 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # return 0 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.521 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1400424 00:31:05.522 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:05.522 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:05.522 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1400424 00:31:05.522 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1400424 ']' 00:31:05.522 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.522 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:05.522 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:05.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:05.522 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:05.522 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.780 [2024-12-07 10:05:34.278710] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:31:05.780 [2024-12-07 10:05:34.278761] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:05.780 [2024-12-07 10:05:34.340738] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:05.780 [2024-12-07 10:05:34.383811] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:05.780 [2024-12-07 10:05:34.383853] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:05.780 [2024-12-07 10:05:34.383861] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:05.780 [2024-12-07 10:05:34.383867] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:05.780 [2024-12-07 10:05:34.383873] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:05.780 [2024-12-07 10:05:34.383927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.780 [2024-12-07 10:05:34.384034] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:05.780 [2024-12-07 10:05:34.384057] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:05.780 [2024-12-07 10:05:34.384058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.780 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:05.780 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:31:05.780 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:06.041 [2024-12-07 10:05:34.662607] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:06.041 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:06.041 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:06.041 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.041 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:06.304 Malloc1 00:31:06.304 10:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:06.596 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:06.871 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:06.871 [2024-12-07 10:05:35.511750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.872 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:07.172 10:05:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:07.172 10:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:07.480 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:07.480 fio-3.35 00:31:07.480 Starting 1 thread 00:31:10.054 00:31:10.054 test: (groupid=0, jobs=1): err= 0: pid=1400974: Sat Dec 7 10:05:38 2024 00:31:10.054 read: IOPS=11.5k, BW=45.1MiB/s (47.3MB/s)(90.4MiB/2005msec) 00:31:10.054 slat (nsec): min=1593, max=241140, avg=1725.60, stdev=2233.13 00:31:10.054 clat (usec): min=3178, max=10626, avg=6118.09, stdev=464.39 00:31:10.054 lat (usec): min=3214, max=10627, avg=6119.81, stdev=464.30 00:31:10.054 clat percentiles (usec): 00:31:10.054 | 1.00th=[ 5014], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:31:10.054 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6259], 00:31:10.054 | 70.00th=[ 6325], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:31:10.054 | 99.00th=[ 7111], 99.50th=[ 7242], 99.90th=[ 8455], 99.95th=[ 9634], 00:31:10.054 | 99.99th=[10552] 00:31:10.054 bw ( KiB/s): min=44984, max=46864, per=99.96%, avg=46146.00, stdev=819.66, samples=4 00:31:10.054 iops : min=11246, max=11716, avg=11536.50, stdev=204.92, samples=4 00:31:10.054 write: IOPS=11.5k, BW=44.8MiB/s (47.0MB/s)(89.8MiB/2005msec); 0 zone resets 00:31:10.054 slat (nsec): min=1632, max=226081, avg=1784.59, stdev=1658.94 00:31:10.054 clat (usec): min=2441, max=9664, avg=4947.27, stdev=384.86 00:31:10.054 lat (usec): min=2456, max=9666, avg=4949.06, stdev=384.81 00:31:10.054 clat percentiles (usec): 00:31:10.054 | 1.00th=[ 4080], 5.00th=[ 4359], 10.00th=[ 4490], 20.00th=[ 4621], 00:31:10.054 | 30.00th=[ 4752], 40.00th=[ 4883], 50.00th=[ 4948], 60.00th=[ 5014], 
00:31:10.054 | 70.00th=[ 5145], 80.00th=[ 5276], 90.00th=[ 5407], 95.00th=[ 5538], 00:31:10.054 | 99.00th=[ 5800], 99.50th=[ 5866], 99.90th=[ 7701], 99.95th=[ 8586], 00:31:10.054 | 99.99th=[ 9110] 00:31:10.054 bw ( KiB/s): min=45392, max=46400, per=99.98%, avg=45844.00, stdev=435.91, samples=4 00:31:10.054 iops : min=11348, max=11600, avg=11461.00, stdev=108.98, samples=4 00:31:10.054 lat (msec) : 4=0.34%, 10=99.65%, 20=0.01% 00:31:10.054 cpu : usr=72.65%, sys=25.30%, ctx=93, majf=0, minf=4 00:31:10.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:10.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:10.054 issued rwts: total=23139,22983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:10.054 00:31:10.054 Run status group 0 (all jobs): 00:31:10.054 READ: bw=45.1MiB/s (47.3MB/s), 45.1MiB/s-45.1MiB/s (47.3MB/s-47.3MB/s), io=90.4MiB (94.8MB), run=2005-2005msec 00:31:10.054 WRITE: bw=44.8MiB/s (47.0MB/s), 44.8MiB/s-44.8MiB/s (47.0MB/s-47.0MB/s), io=89.8MiB (94.1MB), run=2005-2005msec 00:31:10.054 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:10.054 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:10.054 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:10.054 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:31:10.054 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:10.055 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:10.055 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:10.055 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:10.055 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.055 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:10.055 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:10.055 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:10.055 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:10.055 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:10.055 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.055 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:10.055 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:10.055 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:10.055 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:10.055 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' 
]] 00:31:10.055 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:10.055 10:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:10.313 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:10.313 fio-3.35 00:31:10.313 Starting 1 thread 00:31:12.857 00:31:12.857 test: (groupid=0, jobs=1): err= 0: pid=1401544: Sat Dec 7 10:05:41 2024 00:31:12.857 read: IOPS=10.4k, BW=163MiB/s (171MB/s)(327MiB/2008msec) 00:31:12.857 slat (nsec): min=2562, max=85452, avg=2827.27, stdev=1263.81 00:31:12.857 clat (usec): min=1653, max=50213, avg=7141.20, stdev=3516.69 00:31:12.857 lat (usec): min=1656, max=50215, avg=7144.03, stdev=3516.76 00:31:12.857 clat percentiles (usec): 00:31:12.857 | 1.00th=[ 3654], 5.00th=[ 4359], 10.00th=[ 4883], 20.00th=[ 5538], 00:31:12.857 | 30.00th=[ 5997], 40.00th=[ 6390], 50.00th=[ 6849], 60.00th=[ 7308], 00:31:12.857 | 70.00th=[ 7635], 80.00th=[ 8094], 90.00th=[ 9110], 95.00th=[10159], 00:31:12.857 | 99.00th=[12256], 99.50th=[43779], 99.90th=[49546], 99.95th=[49546], 00:31:12.857 | 99.99th=[50070] 00:31:12.857 bw ( KiB/s): min=76512, max=95360, per=50.96%, avg=85072.00, stdev=7939.98, samples=4 00:31:12.857 iops : min= 4782, max= 5960, avg=5317.00, stdev=496.25, samples=4 00:31:12.857 write: IOPS=6286, BW=98.2MiB/s (103MB/s)(173MiB/1765msec); 0 zone resets 00:31:12.857 slat (usec): min=29, max=318, avg=31.56, stdev= 6.52 00:31:12.857 clat (usec): min=3691, max=15212, avg=8830.59, stdev=1500.92 00:31:12.857 lat (usec): min=3721, max=15242, avg=8862.14, stdev=1502.17 00:31:12.857 clat percentiles (usec): 00:31:12.857 | 1.00th=[ 5669], 5.00th=[ 6652], 10.00th=[ 7111], 
20.00th=[ 7570], 00:31:12.857 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9110], 00:31:12.857 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[10945], 95.00th=[11469], 00:31:12.857 | 99.00th=[12780], 99.50th=[13304], 99.90th=[14222], 99.95th=[14484], 00:31:12.857 | 99.99th=[14615] 00:31:12.857 bw ( KiB/s): min=80096, max=99200, per=87.92%, avg=88424.00, stdev=8041.81, samples=4 00:31:12.857 iops : min= 5006, max= 6200, avg=5526.50, stdev=502.61, samples=4 00:31:12.857 lat (msec) : 2=0.03%, 4=1.63%, 10=87.79%, 20=10.15%, 50=0.38% 00:31:12.857 lat (msec) : 100=0.02% 00:31:12.857 cpu : usr=84.70%, sys=14.20%, ctx=46, majf=0, minf=4 00:31:12.857 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:31:12.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:12.857 issued rwts: total=20950,11095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.857 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:12.857 00:31:12.857 Run status group 0 (all jobs): 00:31:12.857 READ: bw=163MiB/s (171MB/s), 163MiB/s-163MiB/s (171MB/s-171MB/s), io=327MiB (343MB), run=2008-2008msec 00:31:12.857 WRITE: bw=98.2MiB/s (103MB/s), 98.2MiB/s-98.2MiB/s (103MB/s-103MB/s), io=173MiB (182MB), run=1765-1765msec 00:31:12.857 10:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:12.857 10:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:12.857 10:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:12.857 10:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:12.857 10:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:31:12.857 10:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1496 -- # local bdfs 00:31:12.857 10:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:12.857 10:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:12.857 10:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:31:12.857 10:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:31:12.857 10:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:31:12.857 10:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:31:16.131 Nvme0n1 00:31:16.132 10:05:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:18.650 10:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=86afe6d1-ea48-4394-add1-b528febb42cc 00:31:18.650 10:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 86afe6d1-ea48-4394-add1-b528febb42cc 00:31:18.650 10:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=86afe6d1-ea48-4394-add1-b528febb42cc 00:31:18.650 10:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:18.650 10:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:18.650 10:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:18.650 10:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:18.907 10:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:18.907 { 00:31:18.907 "uuid": "86afe6d1-ea48-4394-add1-b528febb42cc", 00:31:18.907 "name": "lvs_0", 00:31:18.907 "base_bdev": "Nvme0n1", 00:31:18.907 "total_data_clusters": 930, 00:31:18.907 "free_clusters": 930, 00:31:18.907 "block_size": 512, 00:31:18.907 "cluster_size": 1073741824 00:31:18.907 } 00:31:18.907 ]' 00:31:18.907 10:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="86afe6d1-ea48-4394-add1-b528febb42cc") .free_clusters' 00:31:18.907 10:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:31:18.907 10:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="86afe6d1-ea48-4394-add1-b528febb42cc") .cluster_size' 00:31:18.907 10:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:31:18.907 10:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:31:18.907 10:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:31:18.907 952320 00:31:18.907 10:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:19.163 fa97aac9-83bb-4bc1-a7fc-6b41b331f3d9 00:31:19.421 10:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:19.421 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:19.678 10:05:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:19.936 10:05:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:19.936 10:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:20.194 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:20.194 fio-3.35 00:31:20.194 Starting 1 thread 00:31:22.719 [2024-12-07 10:05:51.155681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3460 is same with the state(6) to be set 00:31:22.719 00:31:22.719 test: (groupid=0, jobs=1): err= 0: pid=1403295: Sat Dec 7 10:05:51 2024 00:31:22.719 read: IOPS=7845, BW=30.6MiB/s (32.1MB/s)(61.5MiB/2006msec) 
00:31:22.719 slat (nsec): min=1582, max=98797, avg=1684.98, stdev=1147.73 00:31:22.719 clat (usec): min=768, max=170137, avg=9004.51, stdev=10392.70 00:31:22.719 lat (usec): min=769, max=170158, avg=9006.20, stdev=10392.87 00:31:22.719 clat percentiles (msec): 00:31:22.719 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:31:22.719 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:31:22.719 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 10], 95.00th=[ 10], 00:31:22.719 | 99.00th=[ 11], 99.50th=[ 13], 99.90th=[ 169], 99.95th=[ 171], 00:31:22.719 | 99.99th=[ 171] 00:31:22.719 bw ( KiB/s): min=22520, max=34600, per=99.87%, avg=31342.00, stdev=5885.72, samples=4 00:31:22.719 iops : min= 5630, max= 8650, avg=7835.50, stdev=1471.43, samples=4 00:31:22.719 write: IOPS=7821, BW=30.6MiB/s (32.0MB/s)(61.3MiB/2006msec); 0 zone resets 00:31:22.719 slat (nsec): min=1611, max=411218, avg=1760.94, stdev=3288.73 00:31:22.719 clat (usec): min=200, max=168703, avg=7256.24, stdev=9726.99 00:31:22.719 lat (usec): min=202, max=168708, avg=7258.00, stdev=9727.52 00:31:22.719 clat percentiles (msec): 00:31:22.719 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:31:22.719 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:31:22.719 | 70.00th=[ 7], 80.00th=[ 8], 90.00th=[ 8], 95.00th=[ 8], 00:31:22.719 | 99.00th=[ 9], 99.50th=[ 10], 99.90th=[ 169], 99.95th=[ 169], 00:31:22.719 | 99.99th=[ 169] 00:31:22.719 bw ( KiB/s): min=23400, max=34056, per=99.92%, avg=31260.00, stdev=5241.74, samples=4 00:31:22.719 iops : min= 5850, max= 8514, avg=7815.00, stdev=1310.43, samples=4 00:31:22.719 lat (usec) : 250=0.01%, 500=0.01%, 1000=0.01% 00:31:22.719 lat (msec) : 2=0.04%, 4=0.23%, 10=98.97%, 20=0.34%, 250=0.41% 00:31:22.719 cpu : usr=69.88%, sys=28.68%, ctx=63, majf=0, minf=4 00:31:22.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:22.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.719 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:22.719 issued rwts: total=15739,15689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:22.719 00:31:22.719 Run status group 0 (all jobs): 00:31:22.719 READ: bw=30.6MiB/s (32.1MB/s), 30.6MiB/s-30.6MiB/s (32.1MB/s-32.1MB/s), io=61.5MiB (64.5MB), run=2006-2006msec 00:31:22.719 WRITE: bw=30.6MiB/s (32.0MB/s), 30.6MiB/s-30.6MiB/s (32.0MB/s-32.0MB/s), io=61.3MiB (64.3MB), run=2006-2006msec 00:31:22.719 10:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:22.719 10:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:24.087 10:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=e165ef6b-1c0f-4bb2-8592-02063df144fb 00:31:24.087 10:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb e165ef6b-1c0f-4bb2-8592-02063df144fb 00:31:24.087 10:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=e165ef6b-1c0f-4bb2-8592-02063df144fb 00:31:24.087 10:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:24.087 10:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:24.087 10:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:24.087 10:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:24.087 10:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:24.087 { 00:31:24.087 "uuid": 
"86afe6d1-ea48-4394-add1-b528febb42cc", 00:31:24.087 "name": "lvs_0", 00:31:24.087 "base_bdev": "Nvme0n1", 00:31:24.087 "total_data_clusters": 930, 00:31:24.087 "free_clusters": 0, 00:31:24.087 "block_size": 512, 00:31:24.087 "cluster_size": 1073741824 00:31:24.087 }, 00:31:24.087 { 00:31:24.087 "uuid": "e165ef6b-1c0f-4bb2-8592-02063df144fb", 00:31:24.087 "name": "lvs_n_0", 00:31:24.087 "base_bdev": "fa97aac9-83bb-4bc1-a7fc-6b41b331f3d9", 00:31:24.087 "total_data_clusters": 237847, 00:31:24.087 "free_clusters": 237847, 00:31:24.087 "block_size": 512, 00:31:24.087 "cluster_size": 4194304 00:31:24.087 } 00:31:24.087 ]' 00:31:24.087 10:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="e165ef6b-1c0f-4bb2-8592-02063df144fb") .free_clusters' 00:31:24.087 10:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:31:24.087 10:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="e165ef6b-1c0f-4bb2-8592-02063df144fb") .cluster_size' 00:31:24.087 10:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:24.087 10:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:31:24.087 10:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:31:24.087 951388 00:31:24.087 10:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:24.649 3dd8e827-edd0-46f6-8d34-c640b1851a61 00:31:24.649 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:24.906 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:25.164 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:25.164 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:25.164 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:25.164 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:25.164 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:25.164 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:25.164 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:25.164 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:25.164 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:25.164 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:25.164 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
00:31:25.164 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:25.164 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:25.434 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:25.434 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:25.434 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:25.434 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:25.434 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:25.434 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:25.434 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:25.434 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:25.434 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:25.434 10:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:25.694 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:25.694 fio-3.35 00:31:25.694 Starting 1 thread 00:31:28.215 00:31:28.215 test: (groupid=0, jobs=1): err= 0: pid=1404201: Sat Dec 7 10:05:56 2024 00:31:28.215 read: IOPS=7576, BW=29.6MiB/s (31.0MB/s)(59.4MiB/2007msec) 00:31:28.215 slat (nsec): min=1556, 
max=92275, avg=1682.58, stdev=1060.26 00:31:28.215 clat (usec): min=2849, max=15435, avg=9327.34, stdev=782.64 00:31:28.215 lat (usec): min=2863, max=15437, avg=9329.02, stdev=782.56 00:31:28.215 clat percentiles (usec): 00:31:28.215 | 1.00th=[ 7570], 5.00th=[ 8094], 10.00th=[ 8356], 20.00th=[ 8717], 00:31:28.215 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:31:28.215 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552], 00:31:28.215 | 99.00th=[10945], 99.50th=[11207], 99.90th=[14222], 99.95th=[15139], 00:31:28.215 | 99.99th=[15401] 00:31:28.215 bw ( KiB/s): min=28990, max=30992, per=99.85%, avg=30263.50, stdev=908.53, samples=4 00:31:28.215 iops : min= 7247, max= 7748, avg=7565.75, stdev=227.37, samples=4 00:31:28.215 write: IOPS=7567, BW=29.6MiB/s (31.0MB/s)(59.3MiB/2007msec); 0 zone resets 00:31:28.215 slat (nsec): min=1588, max=82642, avg=1747.64, stdev=737.85 00:31:28.215 clat (usec): min=1973, max=14235, avg=7466.28, stdev=670.20 00:31:28.215 lat (usec): min=1978, max=14237, avg=7468.03, stdev=670.17 00:31:28.215 clat percentiles (usec): 00:31:28.215 | 1.00th=[ 5932], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6980], 00:31:28.215 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7635], 00:31:28.215 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8291], 95.00th=[ 8455], 00:31:28.215 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[11600], 99.95th=[12911], 00:31:28.215 | 99.99th=[13173] 00:31:28.215 bw ( KiB/s): min=30019, max=30592, per=99.91%, avg=30240.75, stdev=257.79, samples=4 00:31:28.215 iops : min= 7504, max= 7648, avg=7560.00, stdev=64.66, samples=4 00:31:28.215 lat (msec) : 2=0.01%, 4=0.11%, 10=90.88%, 20=9.01% 00:31:28.215 cpu : usr=69.74%, sys=29.21%, ctx=77, majf=0, minf=4 00:31:28.215 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:28.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:31:28.215 issued rwts: total=15207,15187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.215 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:28.215 00:31:28.215 Run status group 0 (all jobs): 00:31:28.215 READ: bw=29.6MiB/s (31.0MB/s), 29.6MiB/s-29.6MiB/s (31.0MB/s-31.0MB/s), io=59.4MiB (62.3MB), run=2007-2007msec 00:31:28.215 WRITE: bw=29.6MiB/s (31.0MB/s), 29.6MiB/s-29.6MiB/s (31.0MB/s-31.0MB/s), io=59.3MiB (62.2MB), run=2007-2007msec 00:31:28.215 10:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:28.215 10:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:28.215 10:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:32.388 10:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:32.388 10:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:34.903 10:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:35.159 10:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:37.060 10:06:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:37.060 rmmod nvme_tcp 00:31:37.060 rmmod nvme_fabrics 00:31:37.060 rmmod nvme_keyring 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 1400424 ']' 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 1400424 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1400424 ']' 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1400424 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1400424 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1400424' 00:31:37.060 killing process with pid 1400424 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1400424 00:31:37.060 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1400424 00:31:37.318 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:37.318 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:37.318 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:37.318 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:37.318 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:31:37.318 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:37.318 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:31:37.318 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:37.318 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:37.318 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.318 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.318 10:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.847 10:06:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:39.847 00:31:39.847 real 0m39.570s 00:31:39.847 user 2m38.925s 00:31:39.847 sys 0m8.750s 00:31:39.847 10:06:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:39.847 10:06:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:39.847 ************************************ 00:31:39.847 END TEST nvmf_fio_host 00:31:39.847 ************************************ 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.847 ************************************ 00:31:39.847 START TEST nvmf_failover 00:31:39.847 ************************************ 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:39.847 * Looking for test storage... 
00:31:39.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:39.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.847 --rc genhtml_branch_coverage=1 00:31:39.847 --rc genhtml_function_coverage=1 00:31:39.847 --rc genhtml_legend=1 00:31:39.847 --rc geninfo_all_blocks=1 00:31:39.847 --rc geninfo_unexecuted_blocks=1 00:31:39.847 00:31:39.847 ' 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- 
# LCOV_OPTS=' 00:31:39.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.847 --rc genhtml_branch_coverage=1 00:31:39.847 --rc genhtml_function_coverage=1 00:31:39.847 --rc genhtml_legend=1 00:31:39.847 --rc geninfo_all_blocks=1 00:31:39.847 --rc geninfo_unexecuted_blocks=1 00:31:39.847 00:31:39.847 ' 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:39.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.847 --rc genhtml_branch_coverage=1 00:31:39.847 --rc genhtml_function_coverage=1 00:31:39.847 --rc genhtml_legend=1 00:31:39.847 --rc geninfo_all_blocks=1 00:31:39.847 --rc geninfo_unexecuted_blocks=1 00:31:39.847 00:31:39.847 ' 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:39.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.847 --rc genhtml_branch_coverage=1 00:31:39.847 --rc genhtml_function_coverage=1 00:31:39.847 --rc genhtml_legend=1 00:31:39.847 --rc geninfo_all_blocks=1 00:31:39.847 --rc geninfo_unexecuted_blocks=1 00:31:39.847 00:31:39.847 ' 00:31:39.847 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:39.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:39.848 10:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:45.117 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:45.118 10:06:13 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:45.118 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:45.118 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:45.118 Found net devices under 0000:86:00.0: cvl_0_0 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:45.118 Found net devices under 0000:86:00.1: cvl_0_1 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # is_hw=yes 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:45.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:45.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:31:45.118 00:31:45.118 --- 10.0.0.2 ping statistics --- 00:31:45.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:45.118 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:45.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:45.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms
00:31:45.118 
00:31:45.118 --- 10.0.0.1 ping statistics ---
00:31:45.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:45.118 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms
00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:45.118 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # return 0
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=1409976
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 1409976
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1409976 ']'
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:45.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:45.119 10:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:31:45.119 [2024-12-07 10:06:13.837057] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:31:45.119 [2024-12-07 10:06:13.837103] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:45.378 [2024-12-07 10:06:13.894742] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:31:45.378 [2024-12-07 10:06:13.936147] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:45.378 [2024-12-07 10:06:13.936186] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:45.378 [2024-12-07 10:06:13.936193] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:45.378 [2024-12-07 10:06:13.936200] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:45.378 [2024-12-07 10:06:13.936205] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:45.378 [2024-12-07 10:06:13.936308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:31:45.378 [2024-12-07 10:06:13.936394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:31:45.378 [2024-12-07 10:06:13.936395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:31:45.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:45.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:31:45.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:31:45.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable
00:31:45.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:31:45.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:45.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:31:45.636 [2024-12-07 10:06:14.236483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:45.637 10:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:31:45.894 Malloc0
00:31:45.894 10:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:46.153 10:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:46.411 10:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:46.411 [2024-12-07 10:06:15.079207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:46.411 10:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:31:46.668 [2024-12-07 10:06:15.283789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:31:46.668 10:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:31:46.926 [2024-12-07 10:06:15.484432] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:31:46.926 10:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1410238
00:31:46.926 10:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:31:46.926 10:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:31:46.926 10:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1410238 /var/tmp/bdevperf.sock
00:31:46.926 10:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1410238 ']'
00:31:46.926 10:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:46.926 10:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:46.926 10:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:46.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:31:46.926 10:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:46.926 10:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:31:47.184 10:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:47.184 10:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:31:47.184 10:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:47.441 NVMe0n1
00:31:47.441 10:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:48.004 
00:31:48.004 10:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1410465
00:31:48.004 10:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:31:48.004 10:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:31:48.932 10:06:17
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:49.189 [2024-12-07 10:06:17.711940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce65d0 is same with the state(6) to be set
00:31:49.190 10:06:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:31:52.469 10:06:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:52.469 
00:31:52.469 10:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:31:52.728 [2024-12-07 10:06:21.365251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7b60 is same with the state(6) to be set
00:31:52.729 10:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:31:56.013 10:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:56.013 [2024-12-07 10:06:24.581269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:56.013 10:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:31:56.944 10:06:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:31:57.200 [2024-12-07 10:06:25.784200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8ef0 is same with the state(6) to be set
00:31:57.200 10:06:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1410465
00:32:03.752 {
00:32:03.752 "results": [
00:32:03.752 {
00:32:03.752 "job": "NVMe0n1",
00:32:03.752 "core_mask": "0x1",
00:32:03.752 "workload": "verify",
00:32:03.752 "status": "finished",
00:32:03.752 "verify_range": {
00:32:03.752 "start": 0,
00:32:03.752 "length": 16384
00:32:03.752 },
00:32:03.752 "queue_depth": 128,
00:32:03.752 "io_size": 4096,
00:32:03.752 "runtime": 15.002204,
00:32:03.752 "iops": 10662.300019383818,
00:32:03.752 "mibps": 41.64960945071804,
00:32:03.752 "io_failed": 8197,
00:32:03.752 "io_timeout": 0,
00:32:03.752 "avg_latency_us": 11397.386346727204,
00:32:03.752 "min_latency_us": 436.31304347826085,
00:32:03.752 "max_latency_us": 22909.106086956523
00:32:03.752 }
00:32:03.752 ],
00:32:03.752 "core_count": 1
00:32:03.752 }
00:32:03.752 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1410238
00:32:03.752 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1410238 ']'
00:32:03.752 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1410238
00:32:03.752 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:32:03.752 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:03.752 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1410238
00:32:03.752 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:32:03.752 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:32:03.752 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1410238'
00:32:03.752 killing process with pid 1410238
00:32:03.752 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1410238
00:32:03.752 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1410238
00:32:03.752 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:03.753 [2024-12-07 10:06:15.560436] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:32:03.753 [2024-12-07 10:06:15.560487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410238 ]
00:32:03.753 [2024-12-07 10:06:15.614068] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:03.753 [2024-12-07 10:06:15.656285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:32:03.753 Running I/O for 15 seconds...
00:32:03.753 10602.00 IOPS, 41.41 MiB/s [2024-12-07T09:06:32.479Z]
[2024-12-07 10:06:17.713321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.753 [2024-12-07 10:06:17.713360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.753 [2024-12-07 10:06:17.713377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.753 [2024-12-07 10:06:17.713385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.753 [2024-12-07 10:06:17.713395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.753 [2024-12-07 10:06:17.713402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.753 [2024-12-07 10:06:17.713412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.753 [2024-12-07 10:06:17.713419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.753 [2024-12-07 10:06:17.713434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.753 [2024-12-07 10:06:17.713449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.753 [2024-12-07 10:06:17.713465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.753 [2024-12-07 10:06:17.713481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.753 [2024-12-07 10:06:17.713496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.753 [2024-12-07 10:06:17.713512] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.753 [2024-12-07 10:06:17.713527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.753 [2024-12-07 10:06:17.713548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.753 [2024-12-07 10:06:17.713563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.753 [2024-12-07 10:06:17.713579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.753 [2024-12-07 10:06:17.713594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:66 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.753 [2024-12-07 10:06:17.713610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.753 [2024-12-07 10:06:17.713626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.753 [2024-12-07 10:06:17.713641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.753 [2024-12-07 10:06:17.713656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.753 [2024-12-07 10:06:17.713673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.753 [2024-12-07 10:06:17.713688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:03.753 [2024-12-07 10:06:17.713698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.753 [2024-12-07 10:06:17.713704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.753 [2024-12-07 10:06:17.713720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.753 [2024-12-07 10:06:17.713738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.753 [2024-12-07 10:06:17.713753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.753 [2024-12-07 10:06:17.713768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.753 [2024-12-07 10:06:17.713776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.713783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.713792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.713798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.713806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.713813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.713821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.713828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.713836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.713843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.713851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.713857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.713865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 
lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.713873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.713882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.713889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.713897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.713903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.713912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.713918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.713928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.713935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.713944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.713957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 
10:06:17.713965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.713972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.713980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.713987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.713995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.714002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.714011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.714017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.714025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.754 [2024-12-07 10:06:17.714032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.714040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.754 [2024-12-07 10:06:17.714048] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.714056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.754 [2024-12-07 10:06:17.714064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.714072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.714078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.714087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.714093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.714102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.714109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.714117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.714126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.714135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.714141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.714151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.714158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.714166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.714173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.714181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.714188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.714196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.714203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.714211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.754 [2024-12-07 10:06:17.714218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.754 [2024-12-07 10:06:17.714226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 
[2024-12-07 10:06:17.714403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714488] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.755 [2024-12-07 10:06:17.714674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.755 [2024-12-07 10:06:17.714681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 
10:06:17.714838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714924] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.714987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.714994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.715001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.715009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.715015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.715024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.756 [2024-12-07 10:06:17.715031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.715052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.756 [2024-12-07 10:06:17.715059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94752 len:8 PRP1 0x0 PRP2 0x0 00:32:03.756 [2024-12-07 10:06:17.715067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.715077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.756 [2024-12-07 10:06:17.715083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.756 [2024-12-07 10:06:17.715089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94760 len:8 PRP1 0x0 PRP2 0x0 00:32:03.756 [2024-12-07 10:06:17.715095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.715102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.756 [2024-12-07 10:06:17.715107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.756 [2024-12-07 10:06:17.715113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94768 len:8 PRP1 0x0 PRP2 0x0 
00:32:03.756 [2024-12-07 10:06:17.715124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.756 [2024-12-07 10:06:17.715131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.756 [2024-12-07 10:06:17.715136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.756 [2024-12-07 10:06:17.715142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94776 len:8 PRP1 0x0 PRP2 0x0 00:32:03.756 [2024-12-07 10:06:17.715148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.757 [2024-12-07 10:06:17.715155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.757 [2024-12-07 10:06:17.715160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.757 [2024-12-07 10:06:17.715165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94784 len:8 PRP1 0x0 PRP2 0x0 00:32:03.757 [2024-12-07 10:06:17.715172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.757 [2024-12-07 10:06:17.715179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.757 [2024-12-07 10:06:17.715184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.757 [2024-12-07 10:06:17.715190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94792 len:8 PRP1 0x0 PRP2 0x0 00:32:03.757 [2024-12-07 10:06:17.715196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.757 [2024-12-07 10:06:17.715203] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.757 [2024-12-07 10:06:17.715208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.757 [2024-12-07 10:06:17.715213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94800 len:8 PRP1 0x0 PRP2 0x0 00:32:03.757 [2024-12-07 10:06:17.715219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.757 [2024-12-07 10:06:17.715226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.757 [2024-12-07 10:06:17.715232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.757 [2024-12-07 10:06:17.715238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94808 len:8 PRP1 0x0 PRP2 0x0 00:32:03.757 [2024-12-07 10:06:17.715244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.757 [2024-12-07 10:06:17.715250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.757 [2024-12-07 10:06:17.715255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.757 [2024-12-07 10:06:17.715261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94816 len:8 PRP1 0x0 PRP2 0x0 00:32:03.757 [2024-12-07 10:06:17.715267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.757 [2024-12-07 10:06:17.715274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.757 [2024-12-07 10:06:17.715279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.757 [2024-12-07 10:06:17.715285] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94824 len:8 PRP1 0x0 PRP2 0x0 00:32:03.757 [2024-12-07 10:06:17.715291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.757 [2024-12-07 10:06:17.715298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.757 [2024-12-07 10:06:17.715304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.757 [2024-12-07 10:06:17.715311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94832 len:8 PRP1 0x0 PRP2 0x0 00:32:03.757 [2024-12-07 10:06:17.715319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.757 [2024-12-07 10:06:17.715326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.757 [2024-12-07 10:06:17.715332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.757 [2024-12-07 10:06:17.715338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94840 len:8 PRP1 0x0 PRP2 0x0 00:32:03.757 [2024-12-07 10:06:17.715344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.757 [2024-12-07 10:06:17.715351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.757 [2024-12-07 10:06:17.715355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.757 [2024-12-07 10:06:17.715361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94848 len:8 PRP1 0x0 PRP2 0x0 00:32:03.757 [2024-12-07 10:06:17.715367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.757 [2024-12-07 10:06:17.715374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.757 [2024-12-07 10:06:17.715380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.757 [2024-12-07 10:06:17.715385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94024 len:8 PRP1 0x0 PRP2 0x0 00:32:03.757 [2024-12-07 10:06:17.715392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.757 [2024-12-07 10:06:17.715399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.757 [2024-12-07 10:06:17.715404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.757 [2024-12-07 10:06:17.715409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94032 len:8 PRP1 0x0 PRP2 0x0 00:32:03.757 [2024-12-07 10:06:17.715416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.757 [2024-12-07 10:06:17.715422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.757 [2024-12-07 10:06:17.715429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.757 [2024-12-07 10:06:17.715434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94040 len:8 PRP1 0x0 PRP2 0x0 00:32:03.757 [2024-12-07 10:06:17.715440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.757 [2024-12-07 10:06:17.727285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.757 [2024-12-07 10:06:17.727295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:32:03.757 [2024-12-07 10:06:17.727302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94048 len:8 PRP1 0x0 PRP2 0x0 00:32:03.757 [2024-12-07 10:06:17.727310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.757 [2024-12-07 10:06:17.727317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.757 [2024-12-07 10:06:17.727322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.757 [2024-12-07 10:06:17.727328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94056 len:8 PRP1 0x0 PRP2 0x0 00:32:03.757 [2024-12-07 10:06:17.727334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.757 [2024-12-07 10:06:17.727344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.757 [2024-12-07 10:06:17.727350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.757 [2024-12-07 10:06:17.727358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94064 len:8 PRP1 0x0 PRP2 0x0 00:32:03.757 [2024-12-07 10:06:17.727367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.757 [2024-12-07 10:06:17.727374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.757 [2024-12-07 10:06:17.727380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.757 [2024-12-07 10:06:17.727387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94072 len:8 PRP1 0x0 PRP2 0x0 00:32:03.757 [2024-12-07 10:06:17.727393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.757 [2024-12-07 10:06:17.727436] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf476c0 was disconnected and freed. reset controller. 00:32:03.757 [2024-12-07 10:06:17.727446] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:03.757 [2024-12-07 10:06:17.727468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.757 [2024-12-07 10:06:17.727477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.757 [2024-12-07 10:06:17.727485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.758 [2024-12-07 10:06:17.727493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.758 [2024-12-07 10:06:17.727501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.758 [2024-12-07 10:06:17.727508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.758 [2024-12-07 10:06:17.727515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.758 [2024-12-07 10:06:17.727522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.758 [2024-12-07 10:06:17.727530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:03.758 [2024-12-07 10:06:17.727576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf28fd0 (9): Bad file descriptor 00:32:03.758 [2024-12-07 10:06:17.731449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:03.758 [2024-12-07 10:06:17.805034] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:03.758 10328.50 IOPS, 40.35 MiB/s [2024-12-07T09:06:32.484Z] 10464.33 IOPS, 40.88 MiB/s [2024-12-07T09:06:32.484Z] 10523.25 IOPS, 41.11 MiB/s [2024-12-07T09:06:32.484Z] [2024-12-07 10:06:21.366152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.758 [2024-12-07 10:06:21.366188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.758 [2024-12-07 10:06:21.366203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:35256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.758 [2024-12-07 10:06:21.366212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.758 [2024-12-07 10:06:21.366222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.758 [2024-12-07 10:06:21.366234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.758 [2024-12-07 10:06:21.366242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:35272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.758 [2024-12-07 10:06:21.366249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:03.758 [2024-12-07 10:06:21.366258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.758 [2024-12-07 10:06:21.366265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.758 [2024-12-07 10:06:21.366274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:35288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.758 [2024-12-07 10:06:21.366280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.758 [2024-12-07 10:06:21.366289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.758 [2024-12-07 10:06:21.366296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.758 [2024-12-07 10:06:21.366304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:35304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.758 [2024-12-07 10:06:21.366311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.758 [2024-12-07 10:06:21.366319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:35312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.758 [2024-12-07 10:06:21.366326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.758 [2024-12-07 10:06:21.366334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:35320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.758 [2024-12-07 10:06:21.366341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.758 [2024-12-07 10:06:21.366350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:35328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.758 [2024-12-07 10:06:21.366357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.758 [2024-12-07 10:06:21.366365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:35336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.758 [2024-12-07 10:06:21.366373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.758 [2024-12-07 10:06:21.366382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:35344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.758 [2024-12-07 10:06:21.366388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.758 [2024-12-07 10:06:21.366396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.758 [2024-12-07 10:06:21.366403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.758 [2024-12-07 10:06:21.366411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.758 [2024-12-07 10:06:21.366418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.758 [2024-12-07 10:06:21.366426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 
lba:35368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.758 [2024-12-07 10:06:21.366435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:35384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 
[2024-12-07 10:06:21.366521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:35456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:35496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:35504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:35512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 
[2024-12-07 10:06:21.366780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:35560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:35584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:35592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:35600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.759 [2024-12-07 10:06:21.366885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.759 [2024-12-07 10:06:21.366894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.366902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.760 [2024-12-07 10:06:21.366909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.366917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.760 [2024-12-07 10:06:21.366924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.366932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.760 [2024-12-07 10:06:21.366939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.366952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 
lba:35640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.760 [2024-12-07 10:06:21.366959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.366968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.760 [2024-12-07 10:06:21.366975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.366983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.760 [2024-12-07 10:06:21.366990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.366998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.760 [2024-12-07 10:06:21.367005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:35672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.760 [2024-12-07 10:06:21.367023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.760 [2024-12-07 10:06:21.367038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 
[2024-12-07 10:06:21.367046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.760 [2024-12-07 10:06:21.367053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:35696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.760 [2024-12-07 10:06:21.367068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.760 [2024-12-07 10:06:21.367083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.760 [2024-12-07 10:06:21.367098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.760 [2024-12-07 10:06:21.367113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.760 [2024-12-07 10:06:21.367129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.760 [2024-12-07 10:06:21.367144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.760 [2024-12-07 10:06:21.367158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.760 [2024-12-07 10:06:21.367174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.760 [2024-12-07 10:06:21.367190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.760 [2024-12-07 10:06:21.367206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.760 [2024-12-07 10:06:21.367222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.760 [2024-12-07 10:06:21.367238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.760 [2024-12-07 10:06:21.367252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.760 [2024-12-07 10:06:21.367268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.760 [2024-12-07 10:06:21.367283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.760 [2024-12-07 10:06:21.367298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 
10:06:21.367306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.760 [2024-12-07 10:06:21.367312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.760 [2024-12-07 10:06:21.367329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.760 [2024-12-07 10:06:21.367344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.760 [2024-12-07 10:06:21.367352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367389] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 
[2024-12-07 10:06:21.367741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.761 [2024-12-07 10:06:21.367817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.761 [2024-12-07 10:06:21.367848] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.761 [2024-12-07 10:06:21.367856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36112 len:8 PRP1 0x0 PRP2 0x0 00:32:03.762 [2024-12-07 10:06:21.367864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.762 [2024-12-07 10:06:21.367874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.762 [2024-12-07 10:06:21.367879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.762 [2024-12-07 10:06:21.367885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36120 len:8 PRP1 0x0 PRP2 0x0 00:32:03.762 [2024-12-07 10:06:21.367891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.762 [2024-12-07 10:06:21.367898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.762 [2024-12-07 10:06:21.367904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.762 [2024-12-07 10:06:21.367909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36128 len:8 PRP1 0x0 PRP2 0x0 00:32:03.762 [2024-12-07 10:06:21.367916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.762 [2024-12-07 10:06:21.367923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.762 [2024-12-07 10:06:21.367928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.762 [2024-12-07 10:06:21.367935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36136 len:8 PRP1 0x0 PRP2 0x0 00:32:03.762 
[2024-12-07 10:06:21.367941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.762 [2024-12-07 10:06:21.367953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.762 [2024-12-07 10:06:21.367958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.762 [2024-12-07 10:06:21.367967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36144 len:8 PRP1 0x0 PRP2 0x0 00:32:03.762 [2024-12-07 10:06:21.367975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.762 [2024-12-07 10:06:21.367982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.762 [2024-12-07 10:06:21.367987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.762 [2024-12-07 10:06:21.367993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36152 len:8 PRP1 0x0 PRP2 0x0 00:32:03.762 [2024-12-07 10:06:21.367999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.762 [2024-12-07 10:06:21.368006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.762 [2024-12-07 10:06:21.368011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.762 [2024-12-07 10:06:21.368019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36160 len:8 PRP1 0x0 PRP2 0x0 00:32:03.762 [2024-12-07 10:06:21.368025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.762 [2024-12-07 10:06:21.368032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:32:03.762 [2024-12-07 10:06:21.368038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.762 [2024-12-07 10:06:21.368044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36168 len:8 PRP1 0x0 PRP2 0x0 00:32:03.762 [2024-12-07 10:06:21.368050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.762 [2024-12-07 10:06:21.368057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.762 [2024-12-07 10:06:21.368062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.762 [2024-12-07 10:06:21.368067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36176 len:8 PRP1 0x0 PRP2 0x0 00:32:03.762 [2024-12-07 10:06:21.368073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.762 [2024-12-07 10:06:21.368081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.762 [2024-12-07 10:06:21.368086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.762 [2024-12-07 10:06:21.368092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36184 len:8 PRP1 0x0 PRP2 0x0 00:32:03.762 [2024-12-07 10:06:21.368099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.762 [2024-12-07 10:06:21.368106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.762 [2024-12-07 10:06:21.368110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.762 [2024-12-07 10:06:21.368116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36192 len:8 PRP1 0x0 PRP2 0x0 00:32:03.762 [2024-12-07 10:06:21.368122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.762 [2024-12-07 10:06:21.368128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.762 [2024-12-07 10:06:21.368135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.762 [2024-12-07 10:06:21.368142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36200 len:8 PRP1 0x0 PRP2 0x0 00:32:03.762 [2024-12-07 10:06:21.368148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.762 [2024-12-07 10:06:21.368155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.762 [2024-12-07 10:06:21.368160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.762 [2024-12-07 10:06:21.368167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36208 len:8 PRP1 0x0 PRP2 0x0 00:32:03.762 [2024-12-07 10:06:21.368173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.762 [2024-12-07 10:06:21.368180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.762 [2024-12-07 10:06:21.368185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.762 [2024-12-07 10:06:21.368191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36216 len:8 PRP1 0x0 PRP2 0x0 00:32:03.762 [2024-12-07 10:06:21.368197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:03.762 [2024-12-07 10:06:21.368205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.762 [2024-12-07 10:06:21.368210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.762 [2024-12-07 10:06:21.368216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36224 len:8 PRP1 0x0 PRP2 0x0 00:32:03.762 [2024-12-07 10:06:21.368222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.762 [2024-12-07 10:06:21.368229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.762 [2024-12-07 10:06:21.368233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.762 [2024-12-07 10:06:21.368239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36232 len:8 PRP1 0x0 PRP2 0x0 00:32:03.762 [2024-12-07 10:06:21.368247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.762 [2024-12-07 10:06:21.368253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.762 [2024-12-07 10:06:21.368259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.762 [2024-12-07 10:06:21.368264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36240 len:8 PRP1 0x0 PRP2 0x0 00:32:03.762 [2024-12-07 10:06:21.368270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.762 [2024-12-07 10:06:21.379537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.762 [2024-12-07 10:06:21.379550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:03.762 [2024-12-07 10:06:21.379559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36248 len:8 PRP1 0x0 PRP2 0x0 00:32:03.763 [2024-12-07 10:06:21.379568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:21.379577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.763 [2024-12-07 10:06:21.379584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.763 [2024-12-07 10:06:21.379591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36256 len:8 PRP1 0x0 PRP2 0x0 00:32:03.763 [2024-12-07 10:06:21.379600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:21.379609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.763 [2024-12-07 10:06:21.379616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.763 [2024-12-07 10:06:21.379625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36264 len:8 PRP1 0x0 PRP2 0x0 00:32:03.763 [2024-12-07 10:06:21.379636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:21.379645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.763 [2024-12-07 10:06:21.379652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.763 [2024-12-07 10:06:21.379662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35712 len:8 PRP1 0x0 PRP2 0x0 00:32:03.763 [2024-12-07 10:06:21.379671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:21.379681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.763 [2024-12-07 10:06:21.379688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.763 [2024-12-07 10:06:21.379696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35720 len:8 PRP1 0x0 PRP2 0x0 00:32:03.763 [2024-12-07 10:06:21.379705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:21.379756] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf499e0 was disconnected and freed. reset controller. 00:32:03.763 [2024-12-07 10:06:21.379769] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:03.763 [2024-12-07 10:06:21.379797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.763 [2024-12-07 10:06:21.379808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:21.379819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.763 [2024-12-07 10:06:21.379828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:21.379838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.763 [2024-12-07 10:06:21.379847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:21.379857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.763 [2024-12-07 10:06:21.379867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:21.379876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:03.763 [2024-12-07 10:06:21.379914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf28fd0 (9): Bad file descriptor 00:32:03.763 [2024-12-07 10:06:21.383790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:03.763 [2024-12-07 10:06:21.461785] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:03.763 10419.60 IOPS, 40.70 MiB/s [2024-12-07T09:06:32.489Z] 10501.50 IOPS, 41.02 MiB/s [2024-12-07T09:06:32.489Z] 10533.86 IOPS, 41.15 MiB/s [2024-12-07T09:06:32.489Z] 10577.00 IOPS, 41.32 MiB/s [2024-12-07T09:06:32.489Z] 10593.56 IOPS, 41.38 MiB/s [2024-12-07T09:06:32.489Z] [2024-12-07 10:06:25.784887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.763 [2024-12-07 10:06:25.784929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:25.784945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:43112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.763 [2024-12-07 10:06:25.784959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:25.784968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:43120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.763 [2024-12-07 10:06:25.784976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:25.784984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.763 [2024-12-07 10:06:25.784991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:25.785000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.763 [2024-12-07 10:06:25.785007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:25.785016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.763 [2024-12-07 10:06:25.785027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:25.785039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.763 [2024-12-07 10:06:25.785046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:25.785054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.763 [2024-12-07 10:06:25.785061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:25.785070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.763 [2024-12-07 10:06:25.785077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:25.785085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.763 [2024-12-07 10:06:25.785092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:25.785100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.763 [2024-12-07 10:06:25.785107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:25.785115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.763 [2024-12-07 10:06:25.785123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:25.785131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.763 [2024-12-07 10:06:25.785138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:25.785146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.763 
[2024-12-07 10:06:25.785153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:25.785161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.763 [2024-12-07 10:06:25.785170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.763 [2024-12-07 10:06:25.785178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.763 [2024-12-07 10:06:25.785185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.764 [2024-12-07 10:06:25.785200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.764 [2024-12-07 10:06:25.785215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.764 [2024-12-07 10:06:25.785232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785240] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:117 nsid:1 lba:43192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.764 [2024-12-07 10:06:25.785248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.764 [2024-12-07 10:06:25.785263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.764 [2024-12-07 10:06:25.785277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.764 [2024-12-07 10:06:25.785293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.764 [2024-12-07 10:06:25.785307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.764 [2024-12-07 10:06:25.785321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.764 [2024-12-07 10:06:25.785336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.764 [2024-12-07 10:06:25.785351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.764 [2024-12-07 10:06:25.785367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.764 [2024-12-07 10:06:25.785381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.764 [2024-12-07 10:06:25.785396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.764 [2024-12-07 10:06:25.785413] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.764 [2024-12-07 10:06:25.785427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.764 [2024-12-07 10:06:25.785443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.764 [2024-12-07 10:06:25.785458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.764 [2024-12-07 10:06:25.785476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.764 [2024-12-07 10:06:25.785492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 
nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.764 [2024-12-07 10:06:25.785508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.764 [2024-12-07 10:06:25.785523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.764 [2024-12-07 10:06:25.785540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.764 [2024-12-07 10:06:25.785550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 
[2024-12-07 10:06:25.785601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43528 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:43536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 
10:06:25.785865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:43600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785951] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:43632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.785990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.785997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.786005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.786013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.786022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.765 [2024-12-07 10:06:25.786029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.765 [2024-12-07 10:06:25.786037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43664 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:32:03.766 [2024-12-07 10:06:25.786044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.766 [2024-12-07 10:06:25.786052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.766 [2024-12-07 10:06:25.786059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.766 [2024-12-07 10:06:25.786067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.766 [2024-12-07 10:06:25.786074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.766 [2024-12-07 10:06:25.786083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.766 [2024-12-07 10:06:25.786089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.766 [2024-12-07 10:06:25.786098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.766 [2024-12-07 10:06:25.786104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.766 [2024-12-07 10:06:25.786112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.766 [2024-12-07 10:06:25.786119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.766 [2024-12-07 10:06:25.786127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.766 [2024-12-07 10:06:25.786134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / "ABORTED - SQ DELETION (00/08)" notice pairs repeat for sequential 8-block writes through lba:43912; only cid, lba, and timestamp vary ...]
00:32:03.766 [2024-12-07 10:06:25.786543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.766 [2024-12-07 10:06:25.786551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43920 len:8 PRP1 0x0 PRP2 0x0 00:32:03.766 [2024-12-07 10:06:25.786558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the "aborting queued i/o" / "Command completed manually" / WRITE / "ABORTED - SQ DELETION (00/08)" sequence repeats for lba:43928 through lba:44112 ...]
00:32:03.768 [2024-12-07 10:06:25.787175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:03.768 [2024-12-07 10:06:25.787182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:03.768 [2024-12-07 10:06:25.797471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44120 len:8 PRP1 0x0 PRP2 0x0 00:32:03.768
[2024-12-07 10:06:25.797483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.768 [2024-12-07 10:06:25.797529] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf64430 was disconnected and freed. reset controller. 00:32:03.768 [2024-12-07 10:06:25.797538] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:03.768 [2024-12-07 10:06:25.797561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.768 [2024-12-07 10:06:25.797568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.768 [2024-12-07 10:06:25.797577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.768 [2024-12-07 10:06:25.797584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.768 [2024-12-07 10:06:25.797591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.768 [2024-12-07 10:06:25.797598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.768 [2024-12-07 10:06:25.797605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.768 [2024-12-07 10:06:25.797611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.768 [2024-12-07 10:06:25.797618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:03.768 [2024-12-07 10:06:25.797640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf28fd0 (9): Bad file descriptor 00:32:03.768 [2024-12-07 10:06:25.801448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:03.768 [2024-12-07 10:06:25.837079] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:03.768 10569.90 IOPS, 41.29 MiB/s [2024-12-07T09:06:32.494Z] 10598.09 IOPS, 41.40 MiB/s [2024-12-07T09:06:32.494Z] 10622.08 IOPS, 41.49 MiB/s [2024-12-07T09:06:32.494Z] 10644.92 IOPS, 41.58 MiB/s [2024-12-07T09:06:32.494Z] 10649.43 IOPS, 41.60 MiB/s 00:32:03.768 Latency(us) 00:32:03.768 [2024-12-07T09:06:32.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:03.768 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:03.768 Verification LBA range: start 0x0 length 0x4000 00:32:03.768 NVMe0n1 : 15.00 10662.30 41.65 546.39 0.00 11397.39 436.31 22909.11 00:32:03.768 [2024-12-07T09:06:32.494Z] =================================================================================================================== 00:32:03.768 [2024-12-07T09:06:32.494Z] Total : 10662.30 41.65 546.39 0.00 11397.39 436.31 22909.11 00:32:03.768 Received shutdown signal, test time was about 15.000000 seconds 00:32:03.768 00:32:03.768 Latency(us) 00:32:03.768 [2024-12-07T09:06:32.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:03.768 [2024-12-07T09:06:32.494Z] =================================================================================================================== 00:32:03.768 [2024-12-07T09:06:32.494Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:03.768 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:03.768 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 
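The failover.sh@65 step above greps the bdevperf output (try.txt) for "Resetting controller successful" and expects exactly three occurrences, one per failover hop; the @67 comparison that follows fails the test otherwise. A minimal, self-contained sketch of that count check, using an inline stand-in log rather than the real try.txt:

```shell
# Hedged sketch of the failover.sh@65/@67 check. The temp file below is a
# hypothetical stand-in for the bdevperf log, not the real try.txt.
log=$(mktemp)
# Write three "reset successful" lines (%.0s consumes each arg, printing
# the format string once per argument).
printf 'Resetting controller successful.\n%.0s' 1 2 3 > "$log"
count=$(grep -c 'Resetting controller successful' "$log")
[ "$count" -eq 3 ] && echo "saw $count successful resets"
rm -f "$log"
```

As in the script, `grep -c` counts matching lines, so one reset per failover target (4420, 4421, 4422) yields the expected count of 3.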
00:32:03.768 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:03.768 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1412939 00:32:03.768 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:03.768 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1412939 /var/tmp/bdevperf.sock 00:32:03.768 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1412939 ']' 00:32:03.768 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:03.768 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:03.768 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:03.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
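The @75 `waitforlisten` step above blocks until the freshly launched bdevperf process is accepting RPCs on /var/tmp/bdevperf.sock. The underlying idea can be sketched as a simple existence poll; the path below is a hypothetical stand-in, not the real RPC socket:

```shell
# Minimal sketch of the "wait for a listener" pattern: poll until the
# socket path appears, then proceed. A plain file stands in for the
# UNIX-domain socket here (assumption for illustration only).
sock=$(mktemp -u)      # unique, not-yet-existing path
touch "$sock"          # simulate the server creating its socket
for _ in 1 2 3 4 5; do
    if [ -e "$sock" ]; then
        echo "listening on $sock"
        break
    fi
    sleep 1            # back off between polls
done
rm -f "$sock"
```

The real helper additionally issues an RPC over the socket to confirm the daemon is responsive, not merely that the path exists.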
00:32:03.768 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:03.768 10:06:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:03.768 10:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:03.768 10:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:03.768 10:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:03.768 [2024-12-07 10:06:32.325211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:03.768 10:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:04.027 [2024-12-07 10:06:32.521760] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:04.027 10:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:04.285 NVMe0n1 00:32:04.285 10:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:04.542 00:32:04.543 10:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 00:32:04.800 00:32:04.800 10:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:04.800 10:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:05.058 10:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:05.316 10:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:08.591 10:06:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:08.591 10:06:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:08.591 10:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:08.591 10:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1413688 00:32:08.591 10:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1413688 00:32:09.524 { 00:32:09.524 "results": [ 00:32:09.524 { 00:32:09.524 "job": "NVMe0n1", 00:32:09.524 "core_mask": "0x1", 00:32:09.524 "workload": "verify", 00:32:09.524 "status": "finished", 00:32:09.524 "verify_range": { 00:32:09.524 "start": 0, 00:32:09.524 "length": 16384 00:32:09.524 }, 00:32:09.524 "queue_depth": 128, 00:32:09.524 "io_size": 4096, 00:32:09.524 "runtime": 1.004603, 00:32:09.524 "iops": 10679.840693288792, 00:32:09.524 "mibps": 41.718127708159344, 00:32:09.524 "io_failed": 0, 00:32:09.524 "io_timeout": 0, 00:32:09.524 "avg_latency_us": 11941.295286971112, 00:32:09.524 
"min_latency_us": 1602.7826086956522, 00:32:09.524 "max_latency_us": 13107.2 00:32:09.524 } 00:32:09.524 ], 00:32:09.524 "core_count": 1 00:32:09.524 } 00:32:09.524 10:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:09.524 [2024-12-07 10:06:31.967331] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:32:09.524 [2024-12-07 10:06:31.967385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412939 ] 00:32:09.524 [2024-12-07 10:06:32.022740] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.524 [2024-12-07 10:06:32.061819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.524 [2024-12-07 10:06:33.862880] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:09.524 [2024-12-07 10:06:33.862929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.524 [2024-12-07 10:06:33.862940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.524 [2024-12-07 10:06:33.862956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.524 [2024-12-07 10:06:33.862964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.524 [2024-12-07 10:06:33.862972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.524 [2024-12-07 10:06:33.862979] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.524 [2024-12-07 10:06:33.862987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.524 [2024-12-07 10:06:33.862993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.524 [2024-12-07 10:06:33.863006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:09.524 [2024-12-07 10:06:33.863031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:09.524 [2024-12-07 10:06:33.863046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112afd0 (9): Bad file descriptor 00:32:09.524 [2024-12-07 10:06:33.914940] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:09.524 Running I/O for 1 seconds... 
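Every aborted completion in the excerpt above carries the status "(00/08)": status code type 0x00 (Generic Command Status) and status code 0x08, which the NVMe specification defines as "Command Aborted due to SQ Deletion", matching the text SPDK prints. A tiny decoder covering just the codes seen in this log (the table is deliberately trimmed, not exhaustive):

```shell
# Decode an SPDK-style "(SCT/SC)" NVMe status pair. Only the entries
# observed in this log are mapped; anything else falls through.
status="00/08"
sct=${status%/*}   # status code type, hex
sc=${status#*/}    # status code, hex
case "$sct" in
    00) type="Generic Command Status" ;;
    01) type="Command Specific Status" ;;
    *)  type="Other" ;;
esac
case "$sct/$sc" in
    00/08) name="ABORTED - SQ DELETION" ;;
    *)     name="unknown" ;;
esac
echo "$type: $name"
```

This is why the abort storm appears during failover: deleting the submission queue on disconnect forces every in-flight write on that qpair to complete with this status before bdev_nvme retries them on the new path.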
00:32:09.524 10601.00 IOPS, 41.41 MiB/s 00:32:09.524 Latency(us) 00:32:09.524 [2024-12-07T09:06:38.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:09.524 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:09.524 Verification LBA range: start 0x0 length 0x4000 00:32:09.524 NVMe0n1 : 1.00 10679.84 41.72 0.00 0.00 11941.30 1602.78 13107.20 00:32:09.524 [2024-12-07T09:06:38.250Z] =================================================================================================================== 00:32:09.524 [2024-12-07T09:06:38.250Z] Total : 10679.84 41.72 0.00 0.00 11941.30 1602.78 13107.20 00:32:09.524 10:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:09.524 10:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:09.781 10:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:10.038 10:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:10.038 10:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:10.295 10:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:10.553 10:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:13.836 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:13.836 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:13.836 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1412939 00:32:13.836 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1412939 ']' 00:32:13.836 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1412939 00:32:13.836 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:13.836 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:13.836 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1412939 00:32:13.836 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:13.836 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:13.836 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1412939' 00:32:13.836 killing process with pid 1412939 00:32:13.836 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1412939 00:32:13.836 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1412939 00:32:13.836 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:13.836 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:14.095 rmmod nvme_tcp 00:32:14.095 rmmod nvme_fabrics 00:32:14.095 rmmod nvme_keyring 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 1409976 ']' 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 1409976 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1409976 ']' 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1409976 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1409976 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1409976' 00:32:14.095 killing process with pid 1409976 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1409976 00:32:14.095 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1409976 00:32:14.356 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:14.357 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:14.357 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:14.357 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:14.357 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:32:14.357 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:14.357 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:32:14.357 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:14.357 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:14.357 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.357 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:14.357 10:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.891 10:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:16.891 00:32:16.891 real 0m36.982s 00:32:16.891 user 1m58.324s 00:32:16.891 sys 
0m7.590s 00:32:16.891 10:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:16.891 10:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:16.891 ************************************ 00:32:16.891 END TEST nvmf_failover 00:32:16.891 ************************************ 00:32:16.891 10:06:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:16.891 10:06:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:16.891 10:06:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:16.891 10:06:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.891 ************************************ 00:32:16.891 START TEST nvmf_host_discovery 00:32:16.891 ************************************ 00:32:16.891 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:16.891 * Looking for test storage... 
00:32:16.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:16.891 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:16.891 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:32:16.891 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:16.891 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:16.891 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:16.891 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:16.891 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:16.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.892 --rc genhtml_branch_coverage=1 00:32:16.892 --rc genhtml_function_coverage=1 00:32:16.892 --rc 
genhtml_legend=1 00:32:16.892 --rc geninfo_all_blocks=1 00:32:16.892 --rc geninfo_unexecuted_blocks=1 00:32:16.892 00:32:16.892 ' 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:16.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.892 --rc genhtml_branch_coverage=1 00:32:16.892 --rc genhtml_function_coverage=1 00:32:16.892 --rc genhtml_legend=1 00:32:16.892 --rc geninfo_all_blocks=1 00:32:16.892 --rc geninfo_unexecuted_blocks=1 00:32:16.892 00:32:16.892 ' 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:16.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.892 --rc genhtml_branch_coverage=1 00:32:16.892 --rc genhtml_function_coverage=1 00:32:16.892 --rc genhtml_legend=1 00:32:16.892 --rc geninfo_all_blocks=1 00:32:16.892 --rc geninfo_unexecuted_blocks=1 00:32:16.892 00:32:16.892 ' 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:16.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.892 --rc genhtml_branch_coverage=1 00:32:16.892 --rc genhtml_function_coverage=1 00:32:16.892 --rc genhtml_legend=1 00:32:16.892 --rc geninfo_all_blocks=1 00:32:16.892 --rc geninfo_unexecuted_blocks=1 00:32:16.892 00:32:16.892 ' 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:16.892 10:06:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:16.892 10:06:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:16.892 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:16.893 10:06:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:16.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 
00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:16.893 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.294 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:22.294 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:22.294 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:22.294 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:22.294 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:22.294 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:22.294 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:22.294 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:22.294 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:22.295 
10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:22.295 10:06:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:22.295 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:22.295 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:22.295 Found net devices under 0000:86:00.0: cvl_0_0 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:22.295 Found net devices under 0000:86:00.1: cvl_0_1 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:22.295 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:22.296 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:22.296 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:22.296 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:22.296 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:22.296 10:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:22.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:22.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:32:22.555 00:32:22.555 --- 10.0.0.2 ping statistics --- 00:32:22.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.555 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:22.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:22.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:32:22.555 00:32:22.555 --- 10.0.0.1 ping statistics --- 00:32:22.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.555 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # return 0 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:22.555 10:06:51 
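The `ipts` call traced above expands (at nvmf/common.sh@786) into a plain `iptables` invocation with an extra `-m comment` tag of the form `SPDK_NVMF:<original args>`, which lets the harness find and delete its own firewall rules later. The sketch below is a hedged reconstruction of that wrapper, not the verbatim nvmf/common.sh helper; it echoes the command instead of running it so it needs no root privileges.

```shell
#!/usr/bin/env bash
# Hedged sketch of the "ipts" wrapper seen in the log: forward all arguments
# to iptables, but append an "-m comment" tag containing those same arguments
# so SPDK-inserted rules can be matched exactly during cleanup.
# Dry-run variant: echoes the command rather than invoking iptables.
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Same rule as in the log: allow NVMe/TCP traffic on port 4420 in.
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The real helper presumably drops the `echo`; tagging rules this way means teardown can do `iptables-save | grep SPDK_NVMF` style matching instead of tracking rule numbers.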
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=1418134 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 1418134 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1418134 ']' 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:22.555 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.555 [2024-12-07 10:06:51.234789] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:32:22.555 [2024-12-07 10:06:51.234843] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:22.814 [2024-12-07 10:06:51.295376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.814 [2024-12-07 10:06:51.337380] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:22.814 [2024-12-07 10:06:51.337419] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:22.814 [2024-12-07 10:06:51.337428] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:22.814 [2024-12-07 10:06:51.337434] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:22.814 [2024-12-07 10:06:51.337439] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:22.814 [2024-12-07 10:06:51.337457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.814 [2024-12-07 10:06:51.467421] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.814 [2024-12-07 10:06:51.479614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:22.814 10:06:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.814 null0 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.814 null1 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1418164 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1418164 /tmp/host.sock 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@831 -- # '[' -z 1418164 ']' 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:22.814 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:22.814 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.073 [2024-12-07 10:06:51.561357] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:32:23.073 [2024-12-07 10:06:51.561400] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418164 ] 00:32:23.073 [2024-12-07 10:06:51.615609] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.073 [2024-12-07 10:06:51.657223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.073 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:23.073 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:23.073 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:23.074 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:23.074 
10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.074 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.074 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.074 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:23.074 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.074 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.074 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.074 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:23.074 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:23.074 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:23.074 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:23.074 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.074 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:23.074 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.074 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:23.074 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:23.332 10:06:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:23.332 
10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:23.332 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.333 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.333 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:23.333 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:23.333 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:23.333 10:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.333 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:23.333 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:23.333 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:23.333 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:23.333 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.333 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:23.333 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.333 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:23.333 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.592 [2024-12-07 10:06:52.073168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.592 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.593 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.593 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:23.593 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:23.593 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:23.593 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:23.593 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
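The `waitforcondition` trace above (autotest_common.sh@914-918: `local cond`, `local max=10`, `(( max-- ))`, `eval`) is a generic polling loop: it re-evaluates an arbitrary shell condition until it holds or the retry budget runs out. A hedged reconstruction follows; the one-second sleep between attempts is an assumption, since the xtrace lines do not show the delay.

```shell
#!/usr/bin/env bash
# Hedged reconstruction of autotest_common.sh's waitforcondition helper:
# poll an arbitrary shell condition up to $max times, sleeping between
# attempts (interval assumed), and fail only if it never becomes true.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0        # condition satisfied
        fi
        sleep 1             # assumed retry interval; not shown in the log
    done
    return 1                # gave up after $max attempts
}
```

This is why the log shows repeated `(( max-- ))` / `eval '[[' ... ']]'` pairs: each iteration re-runs helpers like `get_subsystem_names` until the discovery service actually attaches `nvme0`.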
00:32:23.593 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:23.593 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:23.593 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.593 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.593 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:23.593 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:23.593 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:23.593 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.593 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:32:23.593 10:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:24.160 [2024-12-07 10:06:52.814462] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:24.160 [2024-12-07 10:06:52.814483] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:24.160 [2024-12-07 10:06:52.814496] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:24.419 [2024-12-07 10:06:52.902758] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:24.419 [2024-12-07 10:06:53.087890] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:24.419 [2024-12-07 10:06:53.087911] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:24.678 10:06:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:24.678 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.679 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:24.679 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:24.679 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:24.679 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:24.679 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:24.679 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:24.679 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:24.679 10:06:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:24.679 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:24.679 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.679 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.679 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:24.679 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:24.679 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:24.679 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:24.938 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == 
expected_count))' 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:24.939 10:06:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.939 [2024-12-07 10:06:53.597493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:24.939 [2024-12-07 10:06:53.598105] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:24.939 [2024-12-07 10:06:53.598137] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:24.939 10:06:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.939 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:25.198 
[2024-12-07 10:06:53.685386] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:25.198 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.198 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:25.198 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:25.198 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:25.198 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:25.198 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:25.198 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:25.198 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:25.198 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:25.198 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:25.198 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.198 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.198 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:25.198 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:32:25.198 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:25.198 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.198 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:25.198 10:06:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:25.198 [2024-12-07 10:06:53.749182] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:25.198 [2024-12-07 10:06:53.749200] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:25.198 [2024-12-07 10:06:53.749205] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.135 [2024-12-07 10:06:54.853176] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:26.135 [2024-12-07 10:06:54.853199] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.135 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:26.396 [2024-12-07 10:06:54.860262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:26.396 [2024-12-07 10:06:54.860283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.396 [2024-12-07 10:06:54.860293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:26.396 [2024-12-07 10:06:54.860301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.396 [2024-12-07 10:06:54.860309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:26.396 [2024-12-07 10:06:54.860317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.396 [2024-12-07 10:06:54.860325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:26.396 [2024-12-07 10:06:54.860332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.396 [2024-12-07 10:06:54.860340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bac30 is same with the state(6) to be set 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq 
-r '.[].name' 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:26.396 [2024-12-07 10:06:54.870274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bac30 (9): Bad file descriptor 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.396 [2024-12-07 10:06:54.880312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:26.396 [2024-12-07 10:06:54.880591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.396 [2024-12-07 10:06:54.880607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8bac30 with addr=10.0.0.2, port=4420 00:32:26.396 [2024-12-07 10:06:54.880620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bac30 is same with the state(6) to be set 00:32:26.396 [2024-12-07 10:06:54.880634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bac30 (9): Bad file descriptor 00:32:26.396 [2024-12-07 10:06:54.880645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:26.396 [2024-12-07 10:06:54.880652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:26.396 [2024-12-07 10:06:54.880661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:32:26.396 [2024-12-07 10:06:54.880672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.396 [2024-12-07 10:06:54.890368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:26.396 [2024-12-07 10:06:54.890496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.396 [2024-12-07 10:06:54.890509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8bac30 with addr=10.0.0.2, port=4420 00:32:26.396 [2024-12-07 10:06:54.890517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bac30 is same with the state(6) to be set 00:32:26.396 [2024-12-07 10:06:54.890527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bac30 (9): Bad file descriptor 00:32:26.396 [2024-12-07 10:06:54.890537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:26.396 [2024-12-07 10:06:54.890544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:26.396 [2024-12-07 10:06:54.890551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:26.396 [2024-12-07 10:06:54.890561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.396 [2024-12-07 10:06:54.900421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:26.396 [2024-12-07 10:06:54.900633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.396 [2024-12-07 10:06:54.900649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8bac30 with addr=10.0.0.2, port=4420 00:32:26.396 [2024-12-07 10:06:54.900658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bac30 is same with the state(6) to be set 00:32:26.396 [2024-12-07 10:06:54.900669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bac30 (9): Bad file descriptor 00:32:26.396 [2024-12-07 10:06:54.900680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:26.396 [2024-12-07 10:06:54.900686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:26.396 [2024-12-07 10:06:54.900693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:26.396 [2024-12-07 10:06:54.900703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:26.396 [2024-12-07 10:06:54.910476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.396 [2024-12-07 10:06:54.910659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.396 [2024-12-07 10:06:54.910675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8bac30 with addr=10.0.0.2, port=4420 00:32:26.396 [2024-12-07 10:06:54.910683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bac30 is same with the state(6) to be set 00:32:26.396 
[2024-12-07 10:06:54.910694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bac30 (9): Bad file descriptor 00:32:26.396 [2024-12-07 10:06:54.910705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:26.396 [2024-12-07 10:06:54.910712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:26.396 [2024-12-07 10:06:54.910719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:26.396 [2024-12-07 10:06:54.910728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:26.396 [2024-12-07 10:06:54.920530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:26.396 [2024-12-07 10:06:54.920773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.396 [2024-12-07 10:06:54.920788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8bac30 with addr=10.0.0.2, port=4420 00:32:26.396 [2024-12-07 10:06:54.920796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bac30 is same with the state(6) to be set 00:32:26.396 [2024-12-07 10:06:54.920808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bac30 (9): Bad file descriptor 00:32:26.396 [2024-12-07 10:06:54.920825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:26.396 [2024-12-07 10:06:54.920832] 
nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:26.396 [2024-12-07 10:06:54.920840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:26.396 [2024-12-07 10:06:54.920851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.396 [2024-12-07 10:06:54.930587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:26.396 [2024-12-07 10:06:54.930766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.396 [2024-12-07 10:06:54.930779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8bac30 with addr=10.0.0.2, port=4420 00:32:26.396 [2024-12-07 10:06:54.930786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bac30 is same with the state(6) to be set 00:32:26.396 [2024-12-07 10:06:54.930797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bac30 (9): Bad file descriptor 00:32:26.396 [2024-12-07 10:06:54.930808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:26.396 [2024-12-07 10:06:54.930815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:26.396 [2024-12-07 10:06:54.930825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:26.396 [2024-12-07 10:06:54.930835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.396 [2024-12-07 10:06:54.939892] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:26.396 [2024-12-07 10:06:54.939909] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.396 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:26.397 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:26.397 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:26.397 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:26.397 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:26.397 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:26.397 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:26.397 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:26.397 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:26.397 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:26.397 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:26.397 10:06:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.397 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:26.397 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.397 10:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # get_subsystem_names 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:26.397 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:26.655 10:06:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.655 10:06:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.588 [2024-12-07 10:06:56.263392] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:27.588 [2024-12-07 10:06:56.263408] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:27.588 [2024-12-07 10:06:56.263418] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:27.846 [2024-12-07 10:06:56.389815] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 
new subsystem nvme0 00:32:27.846 [2024-12-07 10:06:56.490712] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:27.846 [2024-12-07 10:06:56.490740] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.846 request: 00:32:27.846 { 
00:32:27.846 "name": "nvme", 00:32:27.846 "trtype": "tcp", 00:32:27.846 "traddr": "10.0.0.2", 00:32:27.846 "adrfam": "ipv4", 00:32:27.846 "trsvcid": "8009", 00:32:27.846 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:27.846 "wait_for_attach": true, 00:32:27.846 "method": "bdev_nvme_start_discovery", 00:32:27.846 "req_id": 1 00:32:27.846 } 00:32:27.846 Got JSON-RPC error response 00:32:27.846 response: 00:32:27.846 { 00:32:27.846 "code": -17, 00:32:27.846 "message": "File exists" 00:32:27.846 } 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.846 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:27.847 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.847 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:27.847 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.847 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:27.847 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:27.847 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:27.847 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.847 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:27.847 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.847 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:27.847 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type 
-t rpc_cmd 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.106 request: 00:32:28.106 { 00:32:28.106 "name": "nvme_second", 00:32:28.106 "trtype": "tcp", 00:32:28.106 "traddr": "10.0.0.2", 00:32:28.106 "adrfam": "ipv4", 00:32:28.106 "trsvcid": "8009", 00:32:28.106 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:28.106 "wait_for_attach": true, 00:32:28.106 "method": "bdev_nvme_start_discovery", 00:32:28.106 "req_id": 1 00:32:28.106 } 00:32:28.106 Got JSON-RPC error response 00:32:28.106 response: 00:32:28.106 { 00:32:28.106 "code": -17, 00:32:28.106 "message": "File exists" 00:32:28.106 } 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:28.106 10:06:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:28.106 10:06:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.106 10:06:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.039 [2024-12-07 10:06:57.734556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.039 [2024-12-07 10:06:57.734584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d1450 with addr=10.0.0.2, port=8010 00:32:29.039 [2024-12-07 10:06:57.734599] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:29.039 [2024-12-07 10:06:57.734606] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:29.039 [2024-12-07 10:06:57.734612] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:30.410 [2024-12-07 10:06:58.737005] 
posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.410 [2024-12-07 10:06:58.737031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d1450 with addr=10.0.0.2, port=8010 00:32:30.410 [2024-12-07 10:06:58.737043] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:30.410 [2024-12-07 10:06:58.737049] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:30.410 [2024-12-07 10:06:58.737056] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:31.344 [2024-12-07 10:06:59.739173] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:31.344 request: 00:32:31.344 { 00:32:31.344 "name": "nvme_second", 00:32:31.344 "trtype": "tcp", 00:32:31.344 "traddr": "10.0.0.2", 00:32:31.344 "adrfam": "ipv4", 00:32:31.344 "trsvcid": "8010", 00:32:31.344 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:31.344 "wait_for_attach": false, 00:32:31.344 "attach_timeout_ms": 3000, 00:32:31.344 "method": "bdev_nvme_start_discovery", 00:32:31.344 "req_id": 1 00:32:31.344 } 00:32:31.344 Got JSON-RPC error response 00:32:31.344 response: 00:32:31.344 { 00:32:31.344 "code": -110, 00:32:31.344 "message": "Connection timed out" 00:32:31.344 } 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1418164 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:31.344 rmmod nvme_tcp 00:32:31.344 rmmod nvme_fabrics 00:32:31.344 rmmod nvme_keyring 00:32:31.344 10:06:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 1418134 ']' 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 1418134 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1418134 ']' 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1418134 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1418134 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1418134' 00:32:31.344 killing process with pid 1418134 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1418134 00:32:31.344 10:06:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1418134 00:32:31.603 10:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:31.603 10:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:31.603 10:07:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:31.603 10:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:31.603 10:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:32:31.603 10:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:31.603 10:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:32:31.603 10:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:31.603 10:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:31.603 10:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.603 10:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:31.603 10:07:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.508 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:33.508 00:32:33.508 real 0m17.056s 00:32:33.508 user 0m20.414s 00:32:33.508 sys 0m5.721s 00:32:33.508 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:33.508 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.508 ************************************ 00:32:33.508 END TEST nvmf_host_discovery 00:32:33.508 ************************************ 00:32:33.508 10:07:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:33.508 10:07:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:33.508 
10:07:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:33.508 10:07:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.767 ************************************ 00:32:33.767 START TEST nvmf_host_multipath_status 00:32:33.767 ************************************ 00:32:33.767 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:33.767 * Looking for test storage... 00:32:33.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:33.767 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:33.767 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:32:33.767 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:33.767 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:33.767 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:33.767 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:33.767 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:33.767 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:33.767 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:33.767 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:33.767 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:33.767 10:07:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:33.767 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:33.767 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:33.767 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:33.767 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:33.767 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:33.768 
10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:33.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.768 --rc genhtml_branch_coverage=1 00:32:33.768 --rc genhtml_function_coverage=1 00:32:33.768 --rc genhtml_legend=1 00:32:33.768 --rc geninfo_all_blocks=1 00:32:33.768 --rc geninfo_unexecuted_blocks=1 00:32:33.768 00:32:33.768 ' 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:33.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.768 --rc genhtml_branch_coverage=1 00:32:33.768 --rc genhtml_function_coverage=1 00:32:33.768 --rc genhtml_legend=1 00:32:33.768 --rc geninfo_all_blocks=1 00:32:33.768 --rc geninfo_unexecuted_blocks=1 00:32:33.768 00:32:33.768 ' 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:33.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.768 --rc genhtml_branch_coverage=1 00:32:33.768 --rc genhtml_function_coverage=1 00:32:33.768 --rc genhtml_legend=1 00:32:33.768 --rc geninfo_all_blocks=1 00:32:33.768 --rc geninfo_unexecuted_blocks=1 00:32:33.768 00:32:33.768 ' 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:33.768 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:32:33.768 --rc genhtml_branch_coverage=1 00:32:33.768 --rc genhtml_function_coverage=1 00:32:33.768 --rc genhtml_legend=1 00:32:33.768 --rc geninfo_all_blocks=1 00:32:33.768 --rc geninfo_unexecuted_blocks=1 00:32:33.768 00:32:33.768 ' 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:33.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:33.768 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.769 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:33.769 10:07:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:33.769 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:33.769 10:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:39.033 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:39.034 
10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:39.034 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:39.034 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:39.034 Found net devices under 0000:86:00.0: cvl_0_0 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:39.034 10:07:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:39.034 Found net devices under 0000:86:00.1: cvl_0_1 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # is_hw=yes 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:39.034 10:07:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:39.034 10:07:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:39.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:39.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:32:39.034 00:32:39.034 --- 10.0.0.2 ping statistics --- 00:32:39.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.034 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:39.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:39.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:32:39.034 00:32:39.034 --- 10.0.0.1 ping statistics --- 00:32:39.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.034 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # return 0 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=1423050 00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@506 -- # waitforlisten 1423050
00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:32:39.034 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1423050 ']'
00:32:39.035 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:39.035 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:39.035 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:39.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:39.035 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:39.035 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:32:39.035 [2024-12-07 10:07:07.525819] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:32:39.035 [2024-12-07 10:07:07.525865] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:39.035 [2024-12-07 10:07:07.583497] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:32:39.035 [2024-12-07 10:07:07.624823] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:39.035 [2024-12-07 10:07:07.624864] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:39.035 [2024-12-07 10:07:07.624871] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:39.035 [2024-12-07 10:07:07.624877] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:39.035 [2024-12-07 10:07:07.624883] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:39.035 [2024-12-07 10:07:07.624922] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:32:39.035 [2024-12-07 10:07:07.624926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:32:39.035 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:39.035 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:32:39.035 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:32:39.035 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:39.035 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:32:39.035 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:39.035 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1423050
00:32:39.035 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:32:39.293 [2024-12-07 10:07:07.919748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:39.293 10:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:32:39.567 Malloc0
00:32:39.567 10:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:32:39.826 10:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:39.826 10:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:40.084 [2024-12-07 10:07:08.691525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:40.084 10:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:32:40.343 [2024-12-07 10:07:08.884015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:32:40.343 10:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1423300
00:32:40.343 10:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:32:40.343 10:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1423300 /var/tmp/bdevperf.sock
00:32:40.343 10:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1423300 ']'
00:32:40.343 10:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:40.343 10:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:40.343 10:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:32:40.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:32:40.343 10:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:40.343 10:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:32:40.343 10:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:32:40.601 10:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:40.601 10:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:32:40.601 10:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:32:40.601 10:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
00:32:41.168 Nvme0n1
00:32:41.168 10:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:32:41.426 Nvme0n1
00:32:41.426 10:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:32:41.426 10:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:32:43.328 10:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:32:43.328 10:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:32:43.585 10:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:32:43.844 10:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:32:44.778 10:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:32:44.778 10:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:32:44.778 10:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:44.778 10:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:45.036 10:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:45.036 10:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:32:45.036 10:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:45.036 10:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:45.294 10:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:45.294 10:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:45.294 10:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:45.294 10:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:45.294 10:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:45.294 10:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:45.294 10:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:45.294 10:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:45.552 10:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:45.552 10:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:45.552 10:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:45.552 10:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:45.811 10:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:45.811 10:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:45.811 10:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:45.811 10:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:46.069 10:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:46.069 10:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:32:46.069 10:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:32:46.327 10:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:32:46.327 10:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:32:47.698 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:32:47.698 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:32:47.698 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:47.698 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:47.698 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:47.698 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:32:47.698 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:47.698 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:47.956 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:47.956 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:47.956 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:47.956 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:47.956 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:47.956 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:47.956 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:47.956 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:48.214 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:48.214 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:48.214 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:48.214 10:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:48.472 10:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:48.473 10:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:48.473 10:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:48.473 10:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:48.731 10:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:48.731 10:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:32:48.731 10:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:32:48.990 10:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:32:48.990 10:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:32:50.365 10:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:32:50.365 10:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:32:50.365 10:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:50.365 10:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:50.365 10:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:50.365 10:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:32:50.365 10:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:50.365 10:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:50.365 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:50.365 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:50.365 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:50.365 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:50.624 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:50.624 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:50.624 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:50.624 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:50.882 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:50.882 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:50.882 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:50.882 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:51.140 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:51.140 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:51.140 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:51.140 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:51.399 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:51.399 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:32:51.399 10:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:32:51.399 10:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:32:51.658 10:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:32:52.639 10:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:32:52.639 10:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:32:52.639 10:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:52.639 10:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:52.896 10:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:52.896 10:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:32:52.896 10:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:52.896 10:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:53.154 10:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:53.154 10:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:53.154 10:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:53.154 10:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:53.411 10:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:53.411 10:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:53.411 10:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:53.411 10:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:53.667 10:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:53.668 10:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:53.668 10:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:53.668 10:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:53.668 10:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:53.668 10:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:32:53.668 10:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:53.668 10:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:53.925 10:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:53.925 10:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:32:53.925 10:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:32:54.182 10:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:32:54.440 10:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:32:55.375 10:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:32:55.375 10:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:32:55.375 10:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:55.375 10:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:55.633 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:55.633 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:32:55.633 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:55.633 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:55.633 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:55.633 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:55.891 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:55.891 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:55.891 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:55.891 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:55.891 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:55.891 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:56.149 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:56.149 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:32:56.149 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:56.149 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:56.407 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:56.407 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:32:56.407 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:56.407 10:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:56.665 10:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:56.665 10:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:32:56.665 10:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:32:56.665 10:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:32:56.921 10:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:32:57.851 10:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:32:57.851 10:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:32:57.851 10:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:57.851 10:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:58.108 10:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:58.108 10:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:32:58.108 10:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:58.108 10:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:58.367 10:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:58.367 10:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:58.367 10:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:58.367 10:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:58.625 10:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:58.625 10:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:58.625 10:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:58.625 10:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:58.883 10:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:58.883 10:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:32:58.883 10:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:58.883 10:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:58.883 10:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:58.883 10:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:58.883 10:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:58.883 10:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:59.141 10:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:59.141 10:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:32:59.399 10:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:32:59.399 10:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:32:59.657 10:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:32:59.657 10:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:33:01.034 10:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:33:01.034 10:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:01.034 10:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:01.034 10:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:01.034 10:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:01.034 10:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:33:01.034 10:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:01.034 10:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:01.293 10:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:01.293 10:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:01.293 10:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:01.293 10:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:01.293 10:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:01.293 10:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:01.293 10:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:01.293 10:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:01.552 10:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.552 10:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:01.552 10:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.552 10:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:01.810 10:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.810 10:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:01.810 10:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.810 10:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:02.069 10:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.069 10:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:02.069 10:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:02.328 10:07:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:02.328 10:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:03.704 10:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:03.704 10:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:03.704 10:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.704 10:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:03.704 10:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:03.704 10:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:03.704 10:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.704 10:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:03.704 10:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.704 10:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:03.704 
10:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.704 10:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:03.963 10:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.963 10:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:03.963 10:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.963 10:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:04.222 10:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.222 10:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:04.222 10:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.222 10:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:04.481 10:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.481 10:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
00:33:04.481 10:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.481 10:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:04.739 10:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.739 10:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:04.739 10:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:04.739 10:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:04.997 10:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:05.933 10:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:05.933 10:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:05.933 10:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.933 10:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").current' 00:33:06.192 10:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.192 10:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:06.192 10:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.192 10:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:06.451 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.451 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:06.451 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.451 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:06.709 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.709 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:06.709 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.709 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:33:06.968 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.968 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:06.968 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.968 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:06.968 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.968 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:06.968 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.968 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:07.226 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.226 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:07.226 10:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:07.484 10:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:07.742 10:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:08.677 10:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:08.677 10:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:08.677 10:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.677 10:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:08.937 10:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.937 10:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:08.937 10:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.937 10:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:09.196 10:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:09.196 10:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:09.196 10:07:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.196 10:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:09.196 10:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.196 10:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:09.196 10:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.196 10:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:09.454 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.454 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:09.454 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.454 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:09.712 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.712 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:09.712 
10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.712 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:09.971 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:09.971 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1423300 00:33:09.971 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1423300 ']' 00:33:09.971 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1423300 00:33:09.971 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:09.971 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:09.971 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1423300 00:33:09.971 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:33:09.971 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:33:09.971 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1423300' 00:33:09.971 killing process with pid 1423300 00:33:09.971 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1423300 00:33:09.971 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1423300 00:33:09.971 { 00:33:09.971 
"results": [ 00:33:09.971 { 00:33:09.971 "job": "Nvme0n1", 00:33:09.971 "core_mask": "0x4", 00:33:09.971 "workload": "verify", 00:33:09.971 "status": "terminated", 00:33:09.971 "verify_range": { 00:33:09.971 "start": 0, 00:33:09.971 "length": 16384 00:33:09.971 }, 00:33:09.971 "queue_depth": 128, 00:33:09.971 "io_size": 4096, 00:33:09.971 "runtime": 28.44724, 00:33:09.971 "iops": 10151.810861088808, 00:33:09.971 "mibps": 39.65551117612816, 00:33:09.971 "io_failed": 0, 00:33:09.971 "io_timeout": 0, 00:33:09.971 "avg_latency_us": 12588.047573998527, 00:33:09.971 "min_latency_us": 223.49913043478261, 00:33:09.971 "max_latency_us": 3019898.88 00:33:09.971 } 00:33:09.971 ], 00:33:09.971 "core_count": 1 00:33:09.971 } 00:33:10.233 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1423300 00:33:10.233 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:10.233 [2024-12-07 10:07:08.948159] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:33:10.233 [2024-12-07 10:07:08.948210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1423300 ] 00:33:10.233 [2024-12-07 10:07:08.997924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.233 [2024-12-07 10:07:09.037543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:10.233 [2024-12-07 10:07:09.871786] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:33:10.233 Running I/O for 90 seconds... 
00:33:10.233 10861.00 IOPS, 42.43 MiB/s [2024-12-07T09:07:38.959Z] 10915.50 IOPS, 42.64 MiB/s [2024-12-07T09:07:38.959Z] 10977.33 IOPS, 42.88 MiB/s [2024-12-07T09:07:38.959Z] 10977.00 IOPS, 42.88 MiB/s [2024-12-07T09:07:38.959Z] 11006.40 IOPS, 42.99 MiB/s [2024-12-07T09:07:38.959Z] 10983.83 IOPS, 42.91 MiB/s [2024-12-07T09:07:38.959Z] 10960.29 IOPS, 42.81 MiB/s [2024-12-07T09:07:38.959Z] 10944.75 IOPS, 42.75 MiB/s [2024-12-07T09:07:38.959Z] 10937.33 IOPS, 42.72 MiB/s [2024-12-07T09:07:38.959Z] 10936.50 IOPS, 42.72 MiB/s [2024-12-07T09:07:38.959Z] 10940.18 IOPS, 42.74 MiB/s [2024-12-07T09:07:38.959Z] 10930.83 IOPS, 42.70 MiB/s [2024-12-07T09:07:38.959Z] [2024-12-07 10:07:22.750505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.233 [2024-12-07 10:07:22.750543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:10.233 [2024-12-07 10:07:22.750576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.233 [2024-12-07 10:07:22.750585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:10.233 [2024-12-07 10:07:22.750599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.233 [2024-12-07 10:07:22.750606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:10.233 [2024-12-07 10:07:22.750620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.233 [2024-12-07 10:07:22.750628] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:10.233 [2024-12-07 10:07:22.750640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.233 [2024-12-07 10:07:22.750647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:10.233 [2024-12-07 10:07:22.750659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.233 [2024-12-07 10:07:22.750666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:10.233 [2024-12-07 10:07:22.750678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.233 [2024-12-07 10:07:22.750686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:10.233 [2024-12-07 10:07:22.750698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.233 [2024-12-07 10:07:22.750706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:10.233 [2024-12-07 10:07:22.750718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.233 [2024-12-07 10:07:22.750725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:10.233 [2024-12-07 10:07:22.750744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:73 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.233 [2024-12-07 10:07:22.750754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:10.233 [2024-12-07 10:07:22.750766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.233 [2024-12-07 10:07:22.750774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:10.233 [2024-12-07 10:07:22.750786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.234 [2024-12-07 10:07:22.750793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.750806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.234 [2024-12-07 10:07:22.750813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.750825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.234 [2024-12-07 10:07:22.750833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.750846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.234 [2024-12-07 10:07:22.750853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.750866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.234 [2024-12-07 10:07:22.750874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.234 [2024-12-07 10:07:22.751048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.234 [2024-12-07 10:07:22.751070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.234 [2024-12-07 10:07:22.751092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.234 [2024-12-07 10:07:22.751113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 
lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.234 [2024-12-07 10:07:22.751134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.234 [2024-12-07 10:07:22.751161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.234 [2024-12-07 10:07:22.751183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.751205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:60472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.751227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.751247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.751267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.751288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.751308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.751329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.751350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60528 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.751370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:60536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.751391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.751414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.751435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.751456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.751477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 
m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.751497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.751517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.751531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.751538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.752025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.752035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.752050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:60608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.752058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.752073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:10.234 [2024-12-07 10:07:22.752080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.752094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.752101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.752116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.752123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.752139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.752148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.752162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.752169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.752184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.752191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:10.234 
[2024-12-07 10:07:22.752206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.752213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:10.234 [2024-12-07 10:07:22.752227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.234 [2024-12-07 10:07:22.752235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 
10:07:22.752323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 
10:07:22.752491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 
10:07:22.752612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 
10:07:22.752742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 
10:07:22.752862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.752978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.752986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 
10:07:22.753002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.753009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.753027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:60936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.753035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.753051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.753058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.753074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.753081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.753097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.753105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.753121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 
10:07:22.753128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.753143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.753151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:10.235 [2024-12-07 10:07:22.753166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.235 [2024-12-07 10:07:22.753173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:61008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 
10:07:22.753258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 
10:07:22.753388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.236 [2024-12-07 10:07:22.753412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.236 [2024-12-07 10:07:22.753434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.236 [2024-12-07 10:07:22.753458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.236 [2024-12-07 10:07:22.753480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.236 [2024-12-07 10:07:22.753503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753519] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.236 [2024-12-07 10:07:22.753526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.753979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.753987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.754005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.754013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.754036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.236 [2024-12-07 10:07:22.754044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.754062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.754069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.754087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.754095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.754113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.754120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.754138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.754145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.754163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.754170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.754188] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.754195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.754214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.236 [2024-12-07 10:07:22.754221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:10.236 [2024-12-07 10:07:22.754240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-12-07 10:07:22.754247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:10.237 10663.85 IOPS, 41.66 MiB/s [2024-12-07T09:07:38.963Z] 9902.14 IOPS, 38.68 MiB/s [2024-12-07T09:07:38.963Z] 9242.00 IOPS, 36.10 MiB/s [2024-12-07T09:07:38.963Z] 8889.94 IOPS, 34.73 MiB/s [2024-12-07T09:07:38.963Z] 9021.12 IOPS, 35.24 MiB/s [2024-12-07T09:07:38.963Z] 9118.89 IOPS, 35.62 MiB/s [2024-12-07T09:07:38.963Z] 9320.00 IOPS, 36.41 MiB/s [2024-12-07T09:07:38.963Z] 9508.35 IOPS, 37.14 MiB/s [2024-12-07T09:07:38.963Z] 9646.71 IOPS, 37.68 MiB/s [2024-12-07T09:07:38.963Z] 9701.64 IOPS, 37.90 MiB/s [2024-12-07T09:07:38.963Z] 9749.96 IOPS, 38.09 MiB/s [2024-12-07T09:07:38.963Z] 9830.62 IOPS, 38.40 MiB/s [2024-12-07T09:07:38.963Z] 9951.48 IOPS, 38.87 MiB/s [2024-12-07T09:07:38.963Z] 10064.15 IOPS, 39.31 MiB/s [2024-12-07T09:07:38.963Z] [2024-12-07 10:07:36.259164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259213] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-12-07 10:07:36.259767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.259855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.259863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.260160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.237 [2024-12-07 10:07:36.260174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:10.237 [2024-12-07 10:07:36.260188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.238 [2024-12-07 10:07:36.260195] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.238 [2024-12-07 10:07:36.260216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.238 [2024-12-07 10:07:36.260235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.238 [2024-12-07 10:07:36.260254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.238 [2024-12-07 10:07:36.260275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.238 [2024-12-07 10:07:36.260297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260311] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.238 [2024-12-07 10:07:36.260318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.238 [2024-12-07 10:07:36.260338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.238 [2024-12-07 10:07:36.260357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.238 [2024-12-07 10:07:36.260376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.238 [2024-12-07 10:07:36.260396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.238 [2024-12-07 10:07:36.260416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.238 [2024-12-07 10:07:36.260435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.238 [2024-12-07 10:07:36.260455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.238 [2024-12-07 10:07:36.260474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.238 [2024-12-07 10:07:36.260494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.238 [2024-12-07 10:07:36.260514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.238 [2024-12-07 10:07:36.260535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.238 [2024-12-07 10:07:36.260554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.238 [2024-12-07 10:07:36.260575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.238 [2024-12-07 10:07:36.260594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.238 [2024-12-07 10:07:36.260614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.238 [2024-12-07 10:07:36.260634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.260646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.238 [2024-12-07 10:07:36.260653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.261740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.238 [2024-12-07 10:07:36.261757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.261773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.238 [2024-12-07 10:07:36.261781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.261794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.238 [2024-12-07 10:07:36.261802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.261815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.238 [2024-12-07 10:07:36.261823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.261835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.238 [2024-12-07 10:07:36.261843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.261855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.238 [2024-12-07 10:07:36.261863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:10.238 [2024-12-07 10:07:36.261881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.238 [2024-12-07 10:07:36.261888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:10.238 10105.56 IOPS, 39.47 MiB/s [2024-12-07T09:07:38.964Z] 10142.61 IOPS, 39.62 MiB/s [2024-12-07T09:07:38.964Z] Received shutdown signal, test time was about 28.447885 seconds 00:33:10.238 00:33:10.238 Latency(us) 00:33:10.238 [2024-12-07T09:07:38.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.238 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:10.238 Verification LBA range: start 0x0 length 0x4000 00:33:10.238 Nvme0n1 : 28.45 10151.81 39.66 0.00 0.00 12588.05 223.50 3019898.88 00:33:10.238 [2024-12-07T09:07:38.964Z] =================================================================================================================== 00:33:10.238 [2024-12-07T09:07:38.964Z] Total : 10151.81 39.66 0.00 0.00 12588.05 223.50 3019898.88 00:33:10.238 [2024-12-07 10:07:38.568223] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath 
configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:33:10.238 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:10.239 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:10.239 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:10.239 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:10.239 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:10.239 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:10.239 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:10.239 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:10.239 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:10.239 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:10.239 rmmod nvme_tcp 00:33:10.498 rmmod nvme_fabrics 00:33:10.498 rmmod nvme_keyring 00:33:10.498 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:10.498 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:10.498 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:10.498 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 1423050 ']' 00:33:10.498 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 
-- # killprocess 1423050 00:33:10.498 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1423050 ']' 00:33:10.498 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1423050 00:33:10.498 10:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:10.498 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:10.498 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1423050 00:33:10.498 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:10.498 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:10.498 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1423050' 00:33:10.498 killing process with pid 1423050 00:33:10.498 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1423050 00:33:10.499 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1423050 00:33:10.757 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:10.757 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:10.757 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:10.757 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:10.757 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:33:10.757 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 
00:33:10.757 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:33:10.757 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:10.757 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:10.757 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.757 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:10.757 10:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.662 10:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:12.662 00:33:12.662 real 0m39.090s 00:33:12.662 user 1m47.511s 00:33:12.662 sys 0m10.872s 00:33:12.662 10:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:12.662 10:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:12.662 ************************************ 00:33:12.662 END TEST nvmf_host_multipath_status 00:33:12.662 ************************************ 00:33:12.662 10:07:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:12.662 10:07:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:12.662 10:07:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:12.662 10:07:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.923 ************************************ 00:33:12.923 START TEST nvmf_discovery_remove_ifc 00:33:12.923 
************************************ 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:12.923 * Looking for test storage... 00:33:12.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:12.923 10:07:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 
00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:12.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.923 --rc genhtml_branch_coverage=1 00:33:12.923 --rc genhtml_function_coverage=1 00:33:12.923 --rc genhtml_legend=1 00:33:12.923 --rc geninfo_all_blocks=1 00:33:12.923 --rc geninfo_unexecuted_blocks=1 00:33:12.923 00:33:12.923 ' 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:12.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.923 --rc genhtml_branch_coverage=1 00:33:12.923 --rc genhtml_function_coverage=1 00:33:12.923 --rc genhtml_legend=1 00:33:12.923 --rc geninfo_all_blocks=1 00:33:12.923 --rc geninfo_unexecuted_blocks=1 00:33:12.923 00:33:12.923 ' 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:12.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.923 --rc genhtml_branch_coverage=1 00:33:12.923 --rc genhtml_function_coverage=1 00:33:12.923 --rc genhtml_legend=1 00:33:12.923 --rc geninfo_all_blocks=1 00:33:12.923 --rc geninfo_unexecuted_blocks=1 00:33:12.923 00:33:12.923 ' 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:12.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.923 --rc genhtml_branch_coverage=1 00:33:12.923 --rc genhtml_function_coverage=1 00:33:12.923 --rc genhtml_legend=1 00:33:12.923 --rc geninfo_all_blocks=1 00:33:12.923 --rc geninfo_unexecuted_blocks=1 00:33:12.923 00:33:12.923 ' 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:12.923 10:07:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.923 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:12.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:12.924 
10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:12.924 10:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 
00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:18.181 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:18.181 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:18.182 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # (( 
0 > 0 )) 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:18.182 Found net devices under 0000:86:00.0: cvl_0_0 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:18.182 Found net devices under 0000:86:00.1: cvl_0_1 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # is_hw=yes 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:18.182 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:18.439 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:18.440 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:18.440 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:18.440 10:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:18.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:18.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:33:18.440 00:33:18.440 --- 10.0.0.2 ping statistics --- 00:33:18.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.440 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:18.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:18.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:33:18.440 00:33:18.440 --- 10.0.0.1 ping statistics --- 00:33:18.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.440 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # return 0 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == 
tcp ']' 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=1431780 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 1431780 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1431780 ']' 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:18.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:18.440 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:18.440 [2024-12-07 10:07:47.134524] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:33:18.440 [2024-12-07 10:07:47.134570] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:18.697 [2024-12-07 10:07:47.193611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.697 [2024-12-07 10:07:47.233745] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:18.697 [2024-12-07 10:07:47.233781] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:18.697 [2024-12-07 10:07:47.233791] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:18.697 [2024-12-07 10:07:47.233798] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:18.697 [2024-12-07 10:07:47.233803] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:18.698 [2024-12-07 10:07:47.233827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.698 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:18.698 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:18.698 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:18.698 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:18.698 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:18.698 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:18.698 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:18.698 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.698 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:18.698 [2024-12-07 10:07:47.375517] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:18.698 [2024-12-07 10:07:47.383683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:18.698 null0 00:33:18.698 [2024-12-07 10:07:47.415683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1431803 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1431803 /tmp/host.sock 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1431803 ']' 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:18.955 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:18.955 [2024-12-07 10:07:47.484402] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:33:18.955 [2024-12-07 10:07:47.484442] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431803 ] 00:33:18.955 [2024-12-07 10:07:47.538380] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.955 [2024-12-07 10:07:47.578152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.955 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:19.213 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.213 10:07:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:19.213 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.213 10:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:20.141 [2024-12-07 10:07:48.772025] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:20.141 [2024-12-07 10:07:48.772048] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:20.141 [2024-12-07 10:07:48.772061] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:20.141 [2024-12-07 10:07:48.858329] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:20.399 [2024-12-07 10:07:49.076427] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:20.399 [2024-12-07 10:07:49.076472] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:20.399 [2024-12-07 10:07:49.076491] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:20.399 [2024-12-07 10:07:49.076505] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:20.399 [2024-12-07 10:07:49.076522] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:20.399 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.399 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:20.399 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:20.399 [2024-12-07 10:07:49.081287] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2216690 was disconnected and freed. delete nvme_qpair. 00:33:20.399 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:20.399 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:20.399 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.399 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:20.399 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:20.399 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:20.399 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.657 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:20.657 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:20.657 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:20.657 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:20.657 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:20.657 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:20.657 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:20.657 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:20.657 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.657 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:20.658 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:20.658 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.658 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:20.658 10:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:21.592 10:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:21.592 10:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:21.592 10:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:21.592 10:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:21.592 10:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.592 10:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:21.592 10:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:21.592 10:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.850 
10:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:21.850 10:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:22.785 10:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:22.785 10:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:22.785 10:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:22.785 10:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.785 10:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:22.785 10:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:22.785 10:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:22.785 10:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.785 10:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:22.785 10:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:23.718 10:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:23.718 10:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:23.718 10:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:23.718 10:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:23.718 10:07:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.718 10:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:23.718 10:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:23.718 10:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.718 10:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:23.718 10:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:25.092 10:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:25.092 10:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:25.092 10:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:25.092 10:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:25.092 10:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.092 10:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:25.092 10:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:25.092 10:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.092 10:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:25.092 10:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:26.026 10:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:33:26.026 10:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:26.026 10:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:26.026 10:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.026 10:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:26.026 10:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:26.026 10:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:26.026 10:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.026 [2024-12-07 10:07:54.518130] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:26.026 [2024-12-07 10:07:54.518173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:26.026 [2024-12-07 10:07:54.518186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.026 [2024-12-07 10:07:54.518198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:26.026 [2024-12-07 10:07:54.518206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.026 [2024-12-07 10:07:54.518214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:26.026 [2024-12-07 10:07:54.518223] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.026 [2024-12-07 10:07:54.518232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:26.026 [2024-12-07 10:07:54.518241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.026 [2024-12-07 10:07:54.518254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:26.026 [2024-12-07 10:07:54.518263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.026 [2024-12-07 10:07:54.518271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2e00 is same with the state(6) to be set 00:33:26.026 10:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:26.026 10:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:26.027 [2024-12-07 10:07:54.528152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f2e00 (9): Bad file descriptor 00:33:26.027 [2024-12-07 10:07:54.538191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:26.958 10:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:26.958 10:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:26.958 10:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:26.958 10:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:26.958 10:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.958 10:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:26.958 10:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:26.958 [2024-12-07 10:07:55.567986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:26.958 [2024-12-07 10:07:55.568033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f2e00 with addr=10.0.0.2, port=4420 00:33:26.958 [2024-12-07 10:07:55.568050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2e00 is same with the state(6) to be set 00:33:26.958 [2024-12-07 10:07:55.568085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f2e00 (9): Bad file descriptor 00:33:26.958 [2024-12-07 10:07:55.568520] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:26.958 [2024-12-07 10:07:55.568552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:26.958 [2024-12-07 10:07:55.568564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:26.958 [2024-12-07 10:07:55.568575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:26.958 [2024-12-07 10:07:55.568597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.958 [2024-12-07 10:07:55.568608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:26.958 10:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.958 10:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:26.958 10:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:27.890 [2024-12-07 10:07:56.571092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:27.890 [2024-12-07 10:07:56.571118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:27.890 [2024-12-07 10:07:56.571126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:27.890 [2024-12-07 10:07:56.571133] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:33:27.890 [2024-12-07 10:07:56.571147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.890 [2024-12-07 10:07:56.571173] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:27.890 [2024-12-07 10:07:56.571197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:27.890 [2024-12-07 10:07:56.571207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.890 [2024-12-07 10:07:56.571216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:27.890 [2024-12-07 10:07:56.571223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.890 [2024-12-07 10:07:56.571230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:27.890 [2024-12-07 10:07:56.571239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.890 [2024-12-07 10:07:56.571246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:27.890 [2024-12-07 10:07:56.571252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.890 [2024-12-07 10:07:56.571261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:27.890 [2024-12-07 10:07:56.571267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.890 [2024-12-07 10:07:56.571274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:33:27.890 [2024-12-07 10:07:56.571307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e2500 (9): Bad file descriptor 00:33:27.890 [2024-12-07 10:07:56.572308] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:27.890 [2024-12-07 10:07:56.572318] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:33:27.890 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:27.890 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:27.890 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:27.890 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.890 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:27.890 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:27.890 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:27.890 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.148 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:28.148 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:28.148 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:28.148 10:07:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:28.148 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:28.148 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:28.148 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:28.148 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.148 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:28.148 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:28.148 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:28.148 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.148 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:28.148 10:07:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:29.080 10:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:29.080 10:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:29.080 10:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:29.080 10:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:29.080 10:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.080 10:07:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:29.080 10:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:29.080 10:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.338 10:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:29.338 10:07:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:29.906 [2024-12-07 10:07:58.628103] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:29.906 [2024-12-07 10:07:58.628121] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:29.906 [2024-12-07 10:07:58.628137] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:30.164 [2024-12-07 10:07:58.715398] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:30.164 10:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:30.164 10:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:30.164 10:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:30.164 10:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.164 10:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:30.164 10:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.164 10:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:33:30.164 10:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.164 10:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:30.164 10:07:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:30.424 [2024-12-07 10:07:58.941277] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:30.424 [2024-12-07 10:07:58.941311] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:30.424 [2024-12-07 10:07:58.941327] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:30.424 [2024-12-07 10:07:58.941340] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:30.424 [2024-12-07 10:07:58.941347] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:30.424 [2024-12-07 10:07:58.946732] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x21c8800 was disconnected and freed. delete nvme_qpair. 
00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1431803 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1431803 ']' 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1431803 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1431803 
00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1431803' 00:33:31.360 killing process with pid 1431803 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1431803 00:33:31.360 10:07:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1431803 00:33:31.627 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:31.628 rmmod nvme_tcp 00:33:31.628 rmmod nvme_fabrics 00:33:31.628 rmmod nvme_keyring 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 1431780 ']' 00:33:31.628 
10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 1431780 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1431780 ']' 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1431780 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1431780 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1431780' 00:33:31.628 killing process with pid 1431780 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1431780 00:33:31.628 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1431780 00:33:31.997 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:31.997 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:31.997 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:31.997 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:31.997 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:33:31.997 10:08:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:31.997 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:33:31.997 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:31.997 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:31.997 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.997 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:31.997 10:08:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:34.040 00:33:34.040 real 0m21.101s 00:33:34.040 user 0m26.759s 00:33:34.040 sys 0m5.520s 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.040 ************************************ 00:33:34.040 END TEST nvmf_discovery_remove_ifc 00:33:34.040 ************************************ 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.040 ************************************ 
00:33:34.040 START TEST nvmf_identify_kernel_target 00:33:34.040 ************************************ 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:34.040 * Looking for test storage... 00:33:34.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:34.040 10:08:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:34.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.040 --rc genhtml_branch_coverage=1 00:33:34.040 --rc genhtml_function_coverage=1 00:33:34.040 --rc genhtml_legend=1 00:33:34.040 --rc geninfo_all_blocks=1 00:33:34.040 --rc geninfo_unexecuted_blocks=1 00:33:34.040 00:33:34.040 ' 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:34.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.040 --rc genhtml_branch_coverage=1 00:33:34.040 --rc genhtml_function_coverage=1 00:33:34.040 --rc genhtml_legend=1 00:33:34.040 --rc geninfo_all_blocks=1 00:33:34.040 --rc geninfo_unexecuted_blocks=1 00:33:34.040 00:33:34.040 ' 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:34.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.040 --rc genhtml_branch_coverage=1 00:33:34.040 --rc genhtml_function_coverage=1 00:33:34.040 --rc genhtml_legend=1 00:33:34.040 --rc geninfo_all_blocks=1 00:33:34.040 --rc geninfo_unexecuted_blocks=1 00:33:34.040 00:33:34.040 ' 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:34.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.040 --rc genhtml_branch_coverage=1 00:33:34.040 --rc genhtml_function_coverage=1 00:33:34.040 --rc genhtml_legend=1 00:33:34.040 --rc geninfo_all_blocks=1 
00:33:34.040 --rc geninfo_unexecuted_blocks=1 00:33:34.040 00:33:34.040 ' 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.040 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.041 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.041 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:34.041 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.041 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:34.041 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:34.041 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:34.041 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:34.041 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:34.299 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:34.299 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:34.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:34.299 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:34.299 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:34.299 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:34.299 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:33:34.299 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:34.299 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:34.299 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:34.299 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:34.299 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:34.299 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:34.299 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:34.299 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.299 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:34.299 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:34.299 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:34.299 10:08:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:39.564 10:08:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:39.564 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:39.564 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:39.564 Found net devices under 0000:86:00.0: cvl_0_0 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:39.564 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:39.565 Found net devices under 0000:86:00.1: cvl_0_1 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.565 10:08:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # is_hw=yes 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:39.565 10:08:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:39.565 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:39.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:39.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:33:39.825 00:33:39.825 --- 10.0.0.2 ping statistics --- 00:33:39.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.825 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:39.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:39.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:33:39.825 00:33:39.825 --- 10.0.0.1 ping statistics --- 00:33:39.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.825 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # return 0 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:39.825 
10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:39.825 10:08:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:42.362 Waiting for block devices as requested 00:33:42.362 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:42.362 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:42.362 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:42.621 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:42.621 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:42.621 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:42.621 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:42.880 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:42.880 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:42.880 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:42.880 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:43.139 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:43.139 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:43.139 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:43.139 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:33:43.398 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:43.398 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:43.398 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:33:43.398 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:43.398 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:33:43.398 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:43.398 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:43.398 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:43.398 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:33:43.398 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:43.398 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:43.398 No valid GPT data, bailing 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:33:43.659 00:33:43.659 Discovery Log Number of Records 2, Generation counter 2 00:33:43.659 =====Discovery Log Entry 0====== 00:33:43.659 trtype: tcp 00:33:43.659 adrfam: ipv4 00:33:43.659 subtype: current discovery subsystem 
00:33:43.659 treq: not specified, sq flow control disable supported 00:33:43.659 portid: 1 00:33:43.659 trsvcid: 4420 00:33:43.659 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:43.659 traddr: 10.0.0.1 00:33:43.659 eflags: none 00:33:43.659 sectype: none 00:33:43.659 =====Discovery Log Entry 1====== 00:33:43.659 trtype: tcp 00:33:43.659 adrfam: ipv4 00:33:43.659 subtype: nvme subsystem 00:33:43.659 treq: not specified, sq flow control disable supported 00:33:43.659 portid: 1 00:33:43.659 trsvcid: 4420 00:33:43.659 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:43.659 traddr: 10.0.0.1 00:33:43.659 eflags: none 00:33:43.659 sectype: none 00:33:43.659 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:43.659 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:43.659 ===================================================== 00:33:43.659 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:43.659 ===================================================== 00:33:43.659 Controller Capabilities/Features 00:33:43.659 ================================ 00:33:43.659 Vendor ID: 0000 00:33:43.659 Subsystem Vendor ID: 0000 00:33:43.659 Serial Number: acafbc80ba81b326ffbe 00:33:43.659 Model Number: Linux 00:33:43.659 Firmware Version: 6.8.9-20 00:33:43.659 Recommended Arb Burst: 0 00:33:43.659 IEEE OUI Identifier: 00 00 00 00:33:43.659 Multi-path I/O 00:33:43.659 May have multiple subsystem ports: No 00:33:43.659 May have multiple controllers: No 00:33:43.659 Associated with SR-IOV VF: No 00:33:43.659 Max Data Transfer Size: Unlimited 00:33:43.659 Max Number of Namespaces: 0 00:33:43.659 Max Number of I/O Queues: 1024 00:33:43.659 NVMe Specification Version (VS): 1.3 00:33:43.659 NVMe Specification Version (Identify): 1.3 00:33:43.659 Maximum Queue Entries: 1024 
00:33:43.659 Contiguous Queues Required: No 00:33:43.659 Arbitration Mechanisms Supported 00:33:43.659 Weighted Round Robin: Not Supported 00:33:43.659 Vendor Specific: Not Supported 00:33:43.659 Reset Timeout: 7500 ms 00:33:43.659 Doorbell Stride: 4 bytes 00:33:43.659 NVM Subsystem Reset: Not Supported 00:33:43.659 Command Sets Supported 00:33:43.659 NVM Command Set: Supported 00:33:43.659 Boot Partition: Not Supported 00:33:43.659 Memory Page Size Minimum: 4096 bytes 00:33:43.659 Memory Page Size Maximum: 4096 bytes 00:33:43.659 Persistent Memory Region: Not Supported 00:33:43.659 Optional Asynchronous Events Supported 00:33:43.659 Namespace Attribute Notices: Not Supported 00:33:43.659 Firmware Activation Notices: Not Supported 00:33:43.659 ANA Change Notices: Not Supported 00:33:43.659 PLE Aggregate Log Change Notices: Not Supported 00:33:43.659 LBA Status Info Alert Notices: Not Supported 00:33:43.659 EGE Aggregate Log Change Notices: Not Supported 00:33:43.659 Normal NVM Subsystem Shutdown event: Not Supported 00:33:43.659 Zone Descriptor Change Notices: Not Supported 00:33:43.660 Discovery Log Change Notices: Supported 00:33:43.660 Controller Attributes 00:33:43.660 128-bit Host Identifier: Not Supported 00:33:43.660 Non-Operational Permissive Mode: Not Supported 00:33:43.660 NVM Sets: Not Supported 00:33:43.660 Read Recovery Levels: Not Supported 00:33:43.660 Endurance Groups: Not Supported 00:33:43.660 Predictable Latency Mode: Not Supported 00:33:43.660 Traffic Based Keep ALive: Not Supported 00:33:43.660 Namespace Granularity: Not Supported 00:33:43.660 SQ Associations: Not Supported 00:33:43.660 UUID List: Not Supported 00:33:43.660 Multi-Domain Subsystem: Not Supported 00:33:43.660 Fixed Capacity Management: Not Supported 00:33:43.660 Variable Capacity Management: Not Supported 00:33:43.660 Delete Endurance Group: Not Supported 00:33:43.660 Delete NVM Set: Not Supported 00:33:43.660 Extended LBA Formats Supported: Not Supported 00:33:43.660 Flexible 
Data Placement Supported: Not Supported 00:33:43.660 00:33:43.660 Controller Memory Buffer Support 00:33:43.660 ================================ 00:33:43.660 Supported: No 00:33:43.660 00:33:43.660 Persistent Memory Region Support 00:33:43.660 ================================ 00:33:43.660 Supported: No 00:33:43.660 00:33:43.660 Admin Command Set Attributes 00:33:43.660 ============================ 00:33:43.660 Security Send/Receive: Not Supported 00:33:43.660 Format NVM: Not Supported 00:33:43.660 Firmware Activate/Download: Not Supported 00:33:43.660 Namespace Management: Not Supported 00:33:43.660 Device Self-Test: Not Supported 00:33:43.660 Directives: Not Supported 00:33:43.660 NVMe-MI: Not Supported 00:33:43.660 Virtualization Management: Not Supported 00:33:43.660 Doorbell Buffer Config: Not Supported 00:33:43.660 Get LBA Status Capability: Not Supported 00:33:43.660 Command & Feature Lockdown Capability: Not Supported 00:33:43.660 Abort Command Limit: 1 00:33:43.660 Async Event Request Limit: 1 00:33:43.660 Number of Firmware Slots: N/A 00:33:43.660 Firmware Slot 1 Read-Only: N/A 00:33:43.660 Firmware Activation Without Reset: N/A 00:33:43.660 Multiple Update Detection Support: N/A 00:33:43.660 Firmware Update Granularity: No Information Provided 00:33:43.660 Per-Namespace SMART Log: No 00:33:43.660 Asymmetric Namespace Access Log Page: Not Supported 00:33:43.660 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:43.660 Command Effects Log Page: Not Supported 00:33:43.660 Get Log Page Extended Data: Supported 00:33:43.660 Telemetry Log Pages: Not Supported 00:33:43.660 Persistent Event Log Pages: Not Supported 00:33:43.660 Supported Log Pages Log Page: May Support 00:33:43.660 Commands Supported & Effects Log Page: Not Supported 00:33:43.660 Feature Identifiers & Effects Log Page:May Support 00:33:43.660 NVMe-MI Commands & Effects Log Page: May Support 00:33:43.660 Data Area 4 for Telemetry Log: Not Supported 00:33:43.660 Error Log Page Entries 
Supported: 1 00:33:43.660 Keep Alive: Not Supported 00:33:43.660 00:33:43.660 NVM Command Set Attributes 00:33:43.660 ========================== 00:33:43.660 Submission Queue Entry Size 00:33:43.660 Max: 1 00:33:43.660 Min: 1 00:33:43.660 Completion Queue Entry Size 00:33:43.660 Max: 1 00:33:43.660 Min: 1 00:33:43.660 Number of Namespaces: 0 00:33:43.660 Compare Command: Not Supported 00:33:43.660 Write Uncorrectable Command: Not Supported 00:33:43.660 Dataset Management Command: Not Supported 00:33:43.660 Write Zeroes Command: Not Supported 00:33:43.660 Set Features Save Field: Not Supported 00:33:43.660 Reservations: Not Supported 00:33:43.660 Timestamp: Not Supported 00:33:43.660 Copy: Not Supported 00:33:43.660 Volatile Write Cache: Not Present 00:33:43.660 Atomic Write Unit (Normal): 1 00:33:43.660 Atomic Write Unit (PFail): 1 00:33:43.660 Atomic Compare & Write Unit: 1 00:33:43.660 Fused Compare & Write: Not Supported 00:33:43.660 Scatter-Gather List 00:33:43.660 SGL Command Set: Supported 00:33:43.660 SGL Keyed: Not Supported 00:33:43.660 SGL Bit Bucket Descriptor: Not Supported 00:33:43.660 SGL Metadata Pointer: Not Supported 00:33:43.660 Oversized SGL: Not Supported 00:33:43.660 SGL Metadata Address: Not Supported 00:33:43.660 SGL Offset: Supported 00:33:43.660 Transport SGL Data Block: Not Supported 00:33:43.660 Replay Protected Memory Block: Not Supported 00:33:43.660 00:33:43.660 Firmware Slot Information 00:33:43.660 ========================= 00:33:43.660 Active slot: 0 00:33:43.660 00:33:43.660 00:33:43.660 Error Log 00:33:43.660 ========= 00:33:43.660 00:33:43.660 Active Namespaces 00:33:43.660 ================= 00:33:43.660 Discovery Log Page 00:33:43.660 ================== 00:33:43.660 Generation Counter: 2 00:33:43.660 Number of Records: 2 00:33:43.660 Record Format: 0 00:33:43.660 00:33:43.660 Discovery Log Entry 0 00:33:43.660 ---------------------- 00:33:43.660 Transport Type: 3 (TCP) 00:33:43.660 Address Family: 1 (IPv4) 00:33:43.660 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:33:43.660 Entry Flags: 00:33:43.660 Duplicate Returned Information: 0 00:33:43.660 Explicit Persistent Connection Support for Discovery: 0 00:33:43.660 Transport Requirements: 00:33:43.660 Secure Channel: Not Specified 00:33:43.660 Port ID: 1 (0x0001) 00:33:43.660 Controller ID: 65535 (0xffff) 00:33:43.660 Admin Max SQ Size: 32 00:33:43.660 Transport Service Identifier: 4420 00:33:43.660 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:43.660 Transport Address: 10.0.0.1 00:33:43.660 Discovery Log Entry 1 00:33:43.660 ---------------------- 00:33:43.660 Transport Type: 3 (TCP) 00:33:43.660 Address Family: 1 (IPv4) 00:33:43.660 Subsystem Type: 2 (NVM Subsystem) 00:33:43.660 Entry Flags: 00:33:43.660 Duplicate Returned Information: 0 00:33:43.660 Explicit Persistent Connection Support for Discovery: 0 00:33:43.660 Transport Requirements: 00:33:43.660 Secure Channel: Not Specified 00:33:43.660 Port ID: 1 (0x0001) 00:33:43.660 Controller ID: 65535 (0xffff) 00:33:43.660 Admin Max SQ Size: 32 00:33:43.660 Transport Service Identifier: 4420 00:33:43.660 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:43.660 Transport Address: 10.0.0.1 00:33:43.660 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:43.920 get_feature(0x01) failed 00:33:43.920 get_feature(0x02) failed 00:33:43.920 get_feature(0x04) failed 00:33:43.920 ===================================================== 00:33:43.920 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:43.920 ===================================================== 00:33:43.920 Controller Capabilities/Features 00:33:43.920 ================================ 00:33:43.920 Vendor ID: 0000 00:33:43.920 Subsystem Vendor ID: 
0000 00:33:43.920 Serial Number: c3a3e3db50277a3c1389 00:33:43.920 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:43.920 Firmware Version: 6.8.9-20 00:33:43.920 Recommended Arb Burst: 6 00:33:43.920 IEEE OUI Identifier: 00 00 00 00:33:43.920 Multi-path I/O 00:33:43.920 May have multiple subsystem ports: Yes 00:33:43.920 May have multiple controllers: Yes 00:33:43.920 Associated with SR-IOV VF: No 00:33:43.920 Max Data Transfer Size: Unlimited 00:33:43.920 Max Number of Namespaces: 1024 00:33:43.920 Max Number of I/O Queues: 128 00:33:43.920 NVMe Specification Version (VS): 1.3 00:33:43.920 NVMe Specification Version (Identify): 1.3 00:33:43.920 Maximum Queue Entries: 1024 00:33:43.920 Contiguous Queues Required: No 00:33:43.921 Arbitration Mechanisms Supported 00:33:43.921 Weighted Round Robin: Not Supported 00:33:43.921 Vendor Specific: Not Supported 00:33:43.921 Reset Timeout: 7500 ms 00:33:43.921 Doorbell Stride: 4 bytes 00:33:43.921 NVM Subsystem Reset: Not Supported 00:33:43.921 Command Sets Supported 00:33:43.921 NVM Command Set: Supported 00:33:43.921 Boot Partition: Not Supported 00:33:43.921 Memory Page Size Minimum: 4096 bytes 00:33:43.921 Memory Page Size Maximum: 4096 bytes 00:33:43.921 Persistent Memory Region: Not Supported 00:33:43.921 Optional Asynchronous Events Supported 00:33:43.921 Namespace Attribute Notices: Supported 00:33:43.921 Firmware Activation Notices: Not Supported 00:33:43.921 ANA Change Notices: Supported 00:33:43.921 PLE Aggregate Log Change Notices: Not Supported 00:33:43.921 LBA Status Info Alert Notices: Not Supported 00:33:43.921 EGE Aggregate Log Change Notices: Not Supported 00:33:43.921 Normal NVM Subsystem Shutdown event: Not Supported 00:33:43.921 Zone Descriptor Change Notices: Not Supported 00:33:43.921 Discovery Log Change Notices: Not Supported 00:33:43.921 Controller Attributes 00:33:43.921 128-bit Host Identifier: Supported 00:33:43.921 Non-Operational Permissive Mode: Not Supported 00:33:43.921 NVM Sets: Not 
Supported 00:33:43.921 Read Recovery Levels: Not Supported 00:33:43.921 Endurance Groups: Not Supported 00:33:43.921 Predictable Latency Mode: Not Supported 00:33:43.921 Traffic Based Keep ALive: Supported 00:33:43.921 Namespace Granularity: Not Supported 00:33:43.921 SQ Associations: Not Supported 00:33:43.921 UUID List: Not Supported 00:33:43.921 Multi-Domain Subsystem: Not Supported 00:33:43.921 Fixed Capacity Management: Not Supported 00:33:43.921 Variable Capacity Management: Not Supported 00:33:43.921 Delete Endurance Group: Not Supported 00:33:43.921 Delete NVM Set: Not Supported 00:33:43.921 Extended LBA Formats Supported: Not Supported 00:33:43.921 Flexible Data Placement Supported: Not Supported 00:33:43.921 00:33:43.921 Controller Memory Buffer Support 00:33:43.921 ================================ 00:33:43.921 Supported: No 00:33:43.921 00:33:43.921 Persistent Memory Region Support 00:33:43.921 ================================ 00:33:43.921 Supported: No 00:33:43.921 00:33:43.921 Admin Command Set Attributes 00:33:43.921 ============================ 00:33:43.921 Security Send/Receive: Not Supported 00:33:43.921 Format NVM: Not Supported 00:33:43.921 Firmware Activate/Download: Not Supported 00:33:43.921 Namespace Management: Not Supported 00:33:43.921 Device Self-Test: Not Supported 00:33:43.921 Directives: Not Supported 00:33:43.921 NVMe-MI: Not Supported 00:33:43.921 Virtualization Management: Not Supported 00:33:43.921 Doorbell Buffer Config: Not Supported 00:33:43.921 Get LBA Status Capability: Not Supported 00:33:43.921 Command & Feature Lockdown Capability: Not Supported 00:33:43.921 Abort Command Limit: 4 00:33:43.921 Async Event Request Limit: 4 00:33:43.921 Number of Firmware Slots: N/A 00:33:43.921 Firmware Slot 1 Read-Only: N/A 00:33:43.921 Firmware Activation Without Reset: N/A 00:33:43.921 Multiple Update Detection Support: N/A 00:33:43.921 Firmware Update Granularity: No Information Provided 00:33:43.921 Per-Namespace SMART Log: Yes 
00:33:43.921 Asymmetric Namespace Access Log Page: Supported 00:33:43.921 ANA Transition Time : 10 sec 00:33:43.921 00:33:43.921 Asymmetric Namespace Access Capabilities 00:33:43.921 ANA Optimized State : Supported 00:33:43.921 ANA Non-Optimized State : Supported 00:33:43.921 ANA Inaccessible State : Supported 00:33:43.921 ANA Persistent Loss State : Supported 00:33:43.921 ANA Change State : Supported 00:33:43.921 ANAGRPID is not changed : No 00:33:43.921 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:43.921 00:33:43.921 ANA Group Identifier Maximum : 128 00:33:43.921 Number of ANA Group Identifiers : 128 00:33:43.921 Max Number of Allowed Namespaces : 1024 00:33:43.921 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:43.921 Command Effects Log Page: Supported 00:33:43.921 Get Log Page Extended Data: Supported 00:33:43.921 Telemetry Log Pages: Not Supported 00:33:43.921 Persistent Event Log Pages: Not Supported 00:33:43.921 Supported Log Pages Log Page: May Support 00:33:43.921 Commands Supported & Effects Log Page: Not Supported 00:33:43.921 Feature Identifiers & Effects Log Page:May Support 00:33:43.921 NVMe-MI Commands & Effects Log Page: May Support 00:33:43.921 Data Area 4 for Telemetry Log: Not Supported 00:33:43.921 Error Log Page Entries Supported: 128 00:33:43.921 Keep Alive: Supported 00:33:43.921 Keep Alive Granularity: 1000 ms 00:33:43.921 00:33:43.921 NVM Command Set Attributes 00:33:43.921 ========================== 00:33:43.921 Submission Queue Entry Size 00:33:43.921 Max: 64 00:33:43.921 Min: 64 00:33:43.921 Completion Queue Entry Size 00:33:43.921 Max: 16 00:33:43.921 Min: 16 00:33:43.921 Number of Namespaces: 1024 00:33:43.921 Compare Command: Not Supported 00:33:43.921 Write Uncorrectable Command: Not Supported 00:33:43.921 Dataset Management Command: Supported 00:33:43.921 Write Zeroes Command: Supported 00:33:43.921 Set Features Save Field: Not Supported 00:33:43.921 Reservations: Not Supported 00:33:43.921 Timestamp: Not Supported 
00:33:43.921 Copy: Not Supported 00:33:43.921 Volatile Write Cache: Present 00:33:43.921 Atomic Write Unit (Normal): 1 00:33:43.921 Atomic Write Unit (PFail): 1 00:33:43.921 Atomic Compare & Write Unit: 1 00:33:43.921 Fused Compare & Write: Not Supported 00:33:43.921 Scatter-Gather List 00:33:43.921 SGL Command Set: Supported 00:33:43.921 SGL Keyed: Not Supported 00:33:43.921 SGL Bit Bucket Descriptor: Not Supported 00:33:43.921 SGL Metadata Pointer: Not Supported 00:33:43.921 Oversized SGL: Not Supported 00:33:43.921 SGL Metadata Address: Not Supported 00:33:43.921 SGL Offset: Supported 00:33:43.921 Transport SGL Data Block: Not Supported 00:33:43.921 Replay Protected Memory Block: Not Supported 00:33:43.921 00:33:43.921 Firmware Slot Information 00:33:43.921 ========================= 00:33:43.921 Active slot: 0 00:33:43.921 00:33:43.921 Asymmetric Namespace Access 00:33:43.921 =========================== 00:33:43.921 Change Count : 0 00:33:43.921 Number of ANA Group Descriptors : 1 00:33:43.921 ANA Group Descriptor : 0 00:33:43.921 ANA Group ID : 1 00:33:43.921 Number of NSID Values : 1 00:33:43.921 Change Count : 0 00:33:43.921 ANA State : 1 00:33:43.921 Namespace Identifier : 1 00:33:43.921 00:33:43.921 Commands Supported and Effects 00:33:43.921 ============================== 00:33:43.921 Admin Commands 00:33:43.921 -------------- 00:33:43.921 Get Log Page (02h): Supported 00:33:43.921 Identify (06h): Supported 00:33:43.921 Abort (08h): Supported 00:33:43.921 Set Features (09h): Supported 00:33:43.921 Get Features (0Ah): Supported 00:33:43.921 Asynchronous Event Request (0Ch): Supported 00:33:43.921 Keep Alive (18h): Supported 00:33:43.921 I/O Commands 00:33:43.921 ------------ 00:33:43.921 Flush (00h): Supported 00:33:43.921 Write (01h): Supported LBA-Change 00:33:43.921 Read (02h): Supported 00:33:43.921 Write Zeroes (08h): Supported LBA-Change 00:33:43.921 Dataset Management (09h): Supported 00:33:43.921 00:33:43.921 Error Log 00:33:43.921 ========= 
00:33:43.921 Entry: 0 00:33:43.921 Error Count: 0x3 00:33:43.921 Submission Queue Id: 0x0 00:33:43.921 Command Id: 0x5 00:33:43.921 Phase Bit: 0 00:33:43.921 Status Code: 0x2 00:33:43.921 Status Code Type: 0x0 00:33:43.921 Do Not Retry: 1 00:33:43.921 Error Location: 0x28 00:33:43.921 LBA: 0x0 00:33:43.921 Namespace: 0x0 00:33:43.921 Vendor Log Page: 0x0 00:33:43.921 ----------- 00:33:43.921 Entry: 1 00:33:43.921 Error Count: 0x2 00:33:43.921 Submission Queue Id: 0x0 00:33:43.921 Command Id: 0x5 00:33:43.921 Phase Bit: 0 00:33:43.921 Status Code: 0x2 00:33:43.921 Status Code Type: 0x0 00:33:43.921 Do Not Retry: 1 00:33:43.921 Error Location: 0x28 00:33:43.921 LBA: 0x0 00:33:43.921 Namespace: 0x0 00:33:43.921 Vendor Log Page: 0x0 00:33:43.921 ----------- 00:33:43.921 Entry: 2 00:33:43.921 Error Count: 0x1 00:33:43.921 Submission Queue Id: 0x0 00:33:43.921 Command Id: 0x4 00:33:43.921 Phase Bit: 0 00:33:43.921 Status Code: 0x2 00:33:43.921 Status Code Type: 0x0 00:33:43.921 Do Not Retry: 1 00:33:43.921 Error Location: 0x28 00:33:43.921 LBA: 0x0 00:33:43.922 Namespace: 0x0 00:33:43.922 Vendor Log Page: 0x0 00:33:43.922 00:33:43.922 Number of Queues 00:33:43.922 ================ 00:33:43.922 Number of I/O Submission Queues: 128 00:33:43.922 Number of I/O Completion Queues: 128 00:33:43.922 00:33:43.922 ZNS Specific Controller Data 00:33:43.922 ============================ 00:33:43.922 Zone Append Size Limit: 0 00:33:43.922 00:33:43.922 00:33:43.922 Active Namespaces 00:33:43.922 ================= 00:33:43.922 get_feature(0x05) failed 00:33:43.922 Namespace ID:1 00:33:43.922 Command Set Identifier: NVM (00h) 00:33:43.922 Deallocate: Supported 00:33:43.922 Deallocated/Unwritten Error: Not Supported 00:33:43.922 Deallocated Read Value: Unknown 00:33:43.922 Deallocate in Write Zeroes: Not Supported 00:33:43.922 Deallocated Guard Field: 0xFFFF 00:33:43.922 Flush: Supported 00:33:43.922 Reservation: Not Supported 00:33:43.922 Namespace Sharing Capabilities: Multiple 
Controllers 00:33:43.922 Size (in LBAs): 1953525168 (931GiB) 00:33:43.922 Capacity (in LBAs): 1953525168 (931GiB) 00:33:43.922 Utilization (in LBAs): 1953525168 (931GiB) 00:33:43.922 UUID: ca38f7ad-4bc1-4cda-b080-e5732f0bd4a5 00:33:43.922 Thin Provisioning: Not Supported 00:33:43.922 Per-NS Atomic Units: Yes 00:33:43.922 Atomic Boundary Size (Normal): 0 00:33:43.922 Atomic Boundary Size (PFail): 0 00:33:43.922 Atomic Boundary Offset: 0 00:33:43.922 NGUID/EUI64 Never Reused: No 00:33:43.922 ANA group ID: 1 00:33:43.922 Namespace Write Protected: No 00:33:43.922 Number of LBA Formats: 1 00:33:43.922 Current LBA Format: LBA Format #00 00:33:43.922 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:43.922 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:43.922 rmmod nvme_tcp 00:33:43.922 rmmod nvme_fabrics 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 
00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:43.922 10:08:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.828 10:08:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:45.828 10:08:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:45.828 10:08:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:45.828 10:08:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:33:45.828 10:08:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:45.828 10:08:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:45.828 10:08:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:45.828 10:08:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:45.828 10:08:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:33:45.828 10:08:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:33:46.087 10:08:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:48.619 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:48.619 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:48.619 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:48.619 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:48.619 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:48.619 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:48.619 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:48.619 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:48.619 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:48.619 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:48.619 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:48.619 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:48.619 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:48.619 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:48.619 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:48.619 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
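The `clean_kernel_target` records above tear down the kernel nvmet configfs tree leaf-first: the port-to-subsystem symlink must go before the namespace directory, the port, and finally the subsystem, since configfs refuses to remove a directory that still has links or children. A sketch of that removal order, with the paths taken verbatim from the log (the helper name is illustrative):

```shell
# Emit the configfs paths removed by clean_kernel_target, in the
# order the trace removes them: symlink first, then leaf dirs,
# then the subsystem root.
nvmet_cleanup_paths() {
    local cfg=/sys/kernel/config/nvmet
    local nqn=nqn.2016-06.io.spdk:testnqn
    echo "$cfg/ports/1/subsystems/$nqn"       # rm -f   (port->subsys link)
    echo "$cfg/subsystems/$nqn/namespaces/1"  # rmdir   (namespace)
    echo "$cfg/ports/1"                       # rmdir   (port)
    echo "$cfg/subsystems/$nqn"               # rmdir   (subsystem)
}
```

After the rmdirs the script unloads `nvmet_tcp` and `nvmet`, then reruns `setup.sh` to rebind the ioatdma and nvme devices to vfio-pci, as the subsequent records show.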
00:33:49.557 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:33:49.557 00:33:49.557 real 0m15.514s 00:33:49.557 user 0m3.837s 00:33:49.557 sys 0m8.005s 00:33:49.557 10:08:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:49.557 10:08:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:49.557 ************************************ 00:33:49.557 END TEST nvmf_identify_kernel_target 00:33:49.557 ************************************ 00:33:49.557 10:08:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:49.557 10:08:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:49.557 10:08:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:49.557 10:08:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.557 ************************************ 00:33:49.557 START TEST nvmf_auth_host 00:33:49.557 ************************************ 00:33:49.557 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:49.557 * Looking for test storage... 
00:33:49.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:49.557 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:49.557 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:33:49.557 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:49.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.817 --rc genhtml_branch_coverage=1 00:33:49.817 --rc genhtml_function_coverage=1 00:33:49.817 --rc genhtml_legend=1 00:33:49.817 --rc geninfo_all_blocks=1 00:33:49.817 --rc geninfo_unexecuted_blocks=1 00:33:49.817 00:33:49.817 ' 00:33:49.817 10:08:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:49.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.817 --rc genhtml_branch_coverage=1 00:33:49.817 --rc genhtml_function_coverage=1 00:33:49.817 --rc genhtml_legend=1 00:33:49.817 --rc geninfo_all_blocks=1 00:33:49.817 --rc geninfo_unexecuted_blocks=1 00:33:49.817 00:33:49.817 ' 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:49.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.817 --rc genhtml_branch_coverage=1 00:33:49.817 --rc genhtml_function_coverage=1 00:33:49.817 --rc genhtml_legend=1 00:33:49.817 --rc geninfo_all_blocks=1 00:33:49.817 --rc geninfo_unexecuted_blocks=1 00:33:49.817 00:33:49.817 ' 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:49.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.817 --rc genhtml_branch_coverage=1 00:33:49.817 --rc genhtml_function_coverage=1 00:33:49.817 --rc genhtml_legend=1 00:33:49.817 --rc geninfo_all_blocks=1 00:33:49.817 --rc geninfo_unexecuted_blocks=1 00:33:49.817 00:33:49.817 ' 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.817 10:08:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:49.817 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:49.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:49.818 10:08:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:49.818 10:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:55.077 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 
'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:55.078 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:55.078 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:55.078 
10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:55.078 Found net devices under 0000:86:00.0: cvl_0_0 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:55.078 Found net devices under 0000:86:00.1: cvl_0_1 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # is_hw=yes 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns 
add cvl_0_0_ns_spdk 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:55.078 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:55.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:55.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:33:55.337 00:33:55.337 --- 10.0.0.2 ping statistics --- 00:33:55.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:55.337 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:55.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:55.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms
00:33:55.337 
00:33:55.337 --- 10.0.0.1 ping statistics ---
00:33:55.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:55.337 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # return 0
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=1443587
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 1443587
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1443587 ']'
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:55.337 10:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=3faa9fb48c5f4e7778a3218985058380
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.5Bk
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 3faa9fb48c5f4e7778a3218985058380 0
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 3faa9fb48c5f4e7778a3218985058380 0
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=3faa9fb48c5f4e7778a3218985058380
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python -
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.5Bk
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.5Bk
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.5Bk
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=7848345a76530f504aa3d0d4cb6aac4826a600e75aead6ee1864f2a8878cb0e2
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.Glj
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 7848345a76530f504aa3d0d4cb6aac4826a600e75aead6ee1864f2a8878cb0e2 3
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 7848345a76530f504aa3d0d4cb6aac4826a600e75aead6ee1864f2a8878cb0e2 3
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=7848345a76530f504aa3d0d4cb6aac4826a600e75aead6ee1864f2a8878cb0e2
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python -
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.Glj
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.Glj
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Glj
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=31d8b99b9a5ce72c6837c0e05da2f15362505a1db1d0677d
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.Nk9
00:33:55.597 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 31d8b99b9a5ce72c6837c0e05da2f15362505a1db1d0677d 0
00:33:55.598 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 31d8b99b9a5ce72c6837c0e05da2f15362505a1db1d0677d 0
00:33:55.598 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest
00:33:55.598 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1
00:33:55.598 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=31d8b99b9a5ce72c6837c0e05da2f15362505a1db1d0677d
00:33:55.598 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0
00:33:55.598 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python -
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.Nk9
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.Nk9
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Nk9
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ca27059777b88643546f670c6033a023d92bdf74e55989e9
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.3ii
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ca27059777b88643546f670c6033a023d92bdf74e55989e9 2
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ca27059777b88643546f670c6033a023d92bdf74e55989e9 2
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=ca27059777b88643546f670c6033a023d92bdf74e55989e9
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python -
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.3ii
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.3ii
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.3ii
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=2fe6908f278d6d2881de7343e32b0268
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.mLN
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 2fe6908f278d6d2881de7343e32b0268 1
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 2fe6908f278d6d2881de7343e32b0268 1
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=2fe6908f278d6d2881de7343e32b0268
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python -
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.mLN
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.mLN
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.mLN
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=44b25c8259f9de40bd3a316accd77f65
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.EbN
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 44b25c8259f9de40bd3a316accd77f65 1
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 44b25c8259f9de40bd3a316accd77f65 1
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest
00:33:55.857 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=44b25c8259f9de40bd3a316accd77f65
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python -
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.EbN
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.EbN
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.EbN
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=bd6fc364e12daac9b7d0212f8fe4402263819d04054c11ac
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.tzl
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key bd6fc364e12daac9b7d0212f8fe4402263819d04054c11ac 2
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 bd6fc364e12daac9b7d0212f8fe4402263819d04054c11ac 2
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=bd6fc364e12daac9b7d0212f8fe4402263819d04054c11ac
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python -
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.tzl
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.tzl
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.tzl
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32
00:33:55.858 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom
00:33:56.116 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=24c9d144fa4813381878e14f8d282825
00:33:56.116 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX
00:33:56.116 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.6VG
00:33:56.116 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 24c9d144fa4813381878e14f8d282825 0
00:33:56.116 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 24c9d144fa4813381878e14f8d282825 0
00:33:56.116 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest
00:33:56.116 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1
00:33:56.116 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=24c9d144fa4813381878e14f8d282825
00:33:56.116 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0
00:33:56.116 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python -
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.6VG
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.6VG
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.6VG
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=bf362ccc9b2d45936f67631651ff5339e4675d06cd66e3ec3020fa4bf580dc80
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.yAv
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key bf362ccc9b2d45936f67631651ff5339e4675d06cd66e3ec3020fa4bf580dc80 3
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 bf362ccc9b2d45936f67631651ff5339e4675d06cd66e3ec3020fa4bf580dc80 3
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=bf362ccc9b2d45936f67631651ff5339e4675d06cd66e3ec3020fa4bf580dc80
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python -
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.yAv
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.yAv
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.yAv
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]=
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1443587
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1443587 ']'
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:56.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
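Each gen_dhchap_key step above draws random hex from /dev/urandom with xxd and then pipes it through an unlogged `python -` heredoc (nvmf/common.sh@729) to produce the `DHHC-1:<digest>:<base64>:` strings that appear later in this log. A minimal standalone sketch of that formatting step, assuming the four bytes appended before base64-encoding are the little-endian CRC32 of the ASCII key (the inline script itself is not shown in the log):

```shell
#!/usr/bin/env bash
# Sketch of format_dhchap_key: wrap a hex key as "DHHC-1:<digest>:<base64(key+crc)>:".
# Assumption: the checksum suffix is the little-endian CRC32 of the ASCII key.
format_dhchap_key() {
    local key=$1 digest=$2
    # Mirrors the log's "python -" step with an inline heredoc.
    python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
EOF
}

# Key 1 from the log: format_dhchap_key 31d8b99b... 0
format_dhchap_key 31d8b99b9a5ce72c6837c0e05da2f15362505a1db1d0677d 0
```

The base64 body of the log's key1 (`DHHC-1:00:MzFkOGI5OWI5YTVj...`) is the base64 of the ASCII hex string plus a four-byte checksum, which is what this sketch reproduces.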
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:56.117 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.375 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:56.375 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0
00:33:56.375 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:33:56.375 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5Bk
00:33:56.375 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:56.375 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.375 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:56.375 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Glj ]]
00:33:56.375 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Glj
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Nk9
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
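The rpc_cmd calls above load each generated key file into the running target's keyring before the auth cases start. Outside the test harness, the same registrations could be sketched with SPDK's rpc.py; the rpc.py location and the default RPC socket are assumptions here, not shown in the log:

```shell
#!/usr/bin/env bash
# Sketch: register DH-HMAC-CHAP key files with a running nvmf_tgt, mirroring
# the rpc_cmd keyring_file_add_key calls in the log. Requires a live target
# listening on the default /var/tmp/spdk.sock; rpc.py path is hypothetical.
set -euo pipefail

rpc=./scripts/rpc.py   # assumption: SPDK in-tree rpc.py

"$rpc" keyring_file_add_key key0  /tmp/spdk.key-null.5Bk
"$rpc" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Glj
"$rpc" keyring_file_add_key key1  /tmp/spdk.key-null.Nk9
"$rpc" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3ii
```

Each keyN/ckeyN pair corresponds to one gen_dhchap_key invocation earlier in the log; the remaining keys (key2..key4) are registered the same way.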
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.3ii ]]
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3ii
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.mLN
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.EbN ]]
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EbN
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.tzl
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.6VG ]]
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.6VG
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.yAv
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]]
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]]
00:33:56.376 10:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet
00:33:56.376 10:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]]
00:33:56.376 10:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:33:58.897 Waiting for block devices as requested
00:33:58.897 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:33:59.154 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:33:59.154 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:33:59.412 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:33:59.412 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:33:59.412 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:33:59.412 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:33:59.669 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:33:59.669 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:33:59.669 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:33:59.669 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:33:59.926 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:33:59.926 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:33:59.926 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:33:59.926 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:34:00.184 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:34:00.184 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme*
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]]
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
No valid GPT data, bailing
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]]
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme0n1
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:34:00.750 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:34:01.007 
00:34:01.007 Discovery Log Number of Records 2, Generation counter 2
00:34:01.007 =====Discovery Log Entry 0======
00:34:01.007 trtype:  tcp
00:34:01.007 adrfam:  ipv4
00:34:01.007 subtype: current discovery subsystem
00:34:01.007 treq:    not specified, sq flow control disable supported
00:34:01.007 portid:  1
00:34:01.007 trsvcid: 4420
00:34:01.007 subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:34:01.007 traddr:  10.0.0.1
00:34:01.007 eflags:  none
00:34:01.007 sectype: none
00:34:01.007 =====Discovery Log Entry 1======
00:34:01.007 trtype:  tcp
00:34:01.007 adrfam:  ipv4
00:34:01.007 subtype: nvme subsystem
00:34:01.007 treq:    not specified, sq flow control disable supported
00:34:01.007 portid:  1
00:34:01.007 trsvcid: 4420
00:34:01.007 subnqn:  nqn.2024-02.io.spdk:cnode0
00:34:01.007 traddr:  10.0.0.1
00:34:01.007 eflags:  none
00:34:01.007 sectype: none
00:34:01.007 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:34:01.007 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:34:01.007 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:34:01.007 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:34:01.007 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:01.007 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:01.007 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:01.007 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:01.007 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==:
00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==:
00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==:
00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: ]]
00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==:
00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.008 nvme0n1 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.008 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: ]] 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 
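The target side of this test was assembled earlier in the trace by hand through configfs (`nvmf/common.sh@682`-`@701`): make a subsystem, expose `/dev/nvme0n1` as namespace 1, describe a TCP listener on 10.0.0.1:4420, and symlink the subsystem into the port. A minimal sketch of that sequence, using a throwaway directory in place of `/sys/kernel/config/nvmet` so it can run without root or the nvmet module loaded:

```shell
# Illustrative sketch only: a temp dir stands in for /sys/kernel/config/nvmet,
# so the directory layout can be exercised outside a live target.
CFG=$(mktemp -d)
SUBSYS="$CFG/subsystems/nqn.2024-02.io.spdk:cnode0"

mkdir -p "$SUBSYS/namespaces/1" "$CFG/ports/1/subsystems"

# On a live target, each of these writes programs the kernel nvmet driver.
echo /dev/nvme0n1 > "$SUBSYS/namespaces/1/device_path"
echo 1            > "$SUBSYS/namespaces/1/enable"
echo 10.0.0.1     > "$CFG/ports/1/addr_traddr"
echo tcp          > "$CFG/ports/1/addr_trtype"
echo 4420         > "$CFG/ports/1/addr_trsvcid"
echo ipv4         > "$CFG/ports/1/addr_adrfam"

# Exposing the subsystem on the port is just a symlink, as in the trace.
ln -s "$SUBSYS" "$CFG/ports/1/subsystems/"
```

The discovery output above (two log entries: the discovery subsystem itself plus `nqn.2024-02.io.spdk:cnode0`, both on traddr 10.0.0.1, trsvcid 4420) is exactly what this layout advertises once the real configfs tree is populated.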
00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.265 nvme0n1 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.265 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.523 10:08:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: ]] 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.523 
10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.523 10:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.523 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.523 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.523 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:01.523 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:01.523 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:01.523 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.524 nvme0n1 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: ]] 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.524 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:34:01.782 nvme0n1 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: ]] 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:01.782 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:01.783 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:01.783 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.783 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.783 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:01.783 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.783 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:01.783 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:01.783 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:01.783 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:01.783 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.783 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.041 nvme0n1 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:02.041 10:08:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.041 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.300 nvme0n1 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.301 
10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: ]] 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:02.301 
10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.301 10:08:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.301 10:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.560 nvme0n1 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.560 10:08:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: ]] 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.560 10:08:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.560 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.819 nvme0n1 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.819 10:08:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: ]] 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.819 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.078 nvme0n1 00:34:03.078 10:08:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:03.078 10:08:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: ]] 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.078 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.337 nvme0n1 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.337 10:08:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.337 10:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.596 nvme0n1 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: ]] 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip=NVMF_INITIATOR_IP 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.596 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.855 nvme0n1 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: ]] 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:03.855 
10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.855 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.113 nvme0n1 00:34:04.113 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.113 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.113 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.113 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.113 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.114 10:08:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: ]] 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.114 10:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.373 nvme0n1 00:34:04.373 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.373 10:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.373 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.373 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.373 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.373 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.373 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.373 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.373 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.373 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:04.631 
10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: ]] 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:04.631 10:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:04.631 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:04.632 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.632 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.632 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:04.632 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.632 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:04.632 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:04.632 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:04.632 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:04.632 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.632 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.890 nvme0n1 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.890 10:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.890 
10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.890 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.148 nvme0n1 00:34:05.148 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.148 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.148 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.148 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.148 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.148 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.148 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.148 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.148 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.148 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.148 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.148 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:05.148 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.148 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: ]] 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.149 10:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.149 10:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.715 nvme0n1 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: ]] 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:05.715 10:08:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.715 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.975 nvme0n1 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: ]] 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.975 10:08:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.975 10:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.544 nvme0n1 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.544 10:08:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.544 10:08:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: ]] 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:06.544 10:08:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.544 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.110 nvme0n1 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.110 10:08:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:07.110 10:08:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:07.110 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:07.111 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:07.111 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.111 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.369 nvme0n1 00:34:07.369 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.369 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.369 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.369 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.369 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.369 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.369 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.369 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.369 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.369 10:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: ]] 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.369 10:08:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.369 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.935 nvme0n1 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.935 10:08:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: ]] 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.935 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.936 10:08:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:07.936 10:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.936 10:08:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.867 nvme0n1 00:34:08.867 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU:
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob:
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU:
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: ]]
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob:
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:08.868 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.432 nvme0n1
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==:
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk:
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==:
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: ]]
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk:
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.432 10:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.999 nvme0n1
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=:
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=:
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.999 10:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.566 nvme0n1
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ:
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=:
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ:
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: ]]
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=:
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.566 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.825 nvme0n1
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==:
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==:
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==:
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: ]]
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==:
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.825 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.084 nvme0n1
00:34:11.084 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.084 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:11.084 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:11.084 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.084 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.084 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.084 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:11.084 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU:
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob:
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU:
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: ]]
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob:
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.085 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.343 nvme0n1
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==:
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk:
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==:
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: ]]
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk:
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:34:11.343 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:11.344 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:11.344 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:34:11.344 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:11.344 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:34:11.344 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:34:11.344 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:34:11.344 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:11.344 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.344 10:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.344 nvme0n1
00:34:11.344 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.344 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:11.344 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:11.344 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.344 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.344 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=:
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=:
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:11.603 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.604 nvme0n1 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: ]] 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.604 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.863 nvme0n1 00:34:11.863 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.863 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.863 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.863 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.863 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.863 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.863 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.863 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.863 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.863 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.863 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.863 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:11.864 
10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: ]] 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.864 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.122 nvme0n1 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 
00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: ]] 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.122 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:12.123 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.123 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.123 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.123 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.123 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:12.123 10:08:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:12.123 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:12.123 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.123 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.123 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:12.123 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.123 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:12.123 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:12.123 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:12.123 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:12.123 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.123 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.382 nvme0n1 00:34:12.382 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.382 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.382 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.382 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.382 10:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.382 10:08:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: ]] 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.382 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.641 nvme0n1 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.641 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.900 nvme0n1 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.900 10:08:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: ]] 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.900 10:08:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:12.900 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.900 10:08:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.159 nvme0n1 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: ]] 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.159 
10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.418 nvme0n1 00:34:13.418 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.418 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.418 10:08:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.418 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.418 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.418 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.418 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.418 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.418 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.418 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.677 10:08:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: ]] 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # local -A ip_candidates 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.677 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.936 nvme0n1 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: ]] 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:13.936 10:08:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.936 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.195 nvme0n1 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.195 10:08:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:14.195 10:08:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:14.195 
10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.195 10:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.455 nvme0n1 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.455 10:08:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: ]] 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.455 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.022 nvme0n1 
00:34:15.022 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.022 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.022 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.022 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.022 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.022 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.022 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.022 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.022 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.022 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:15.023 10:08:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: ]] 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.023 
10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.023 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.281 nvme0n1 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.281 10:08:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:15.281 10:08:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: ]] 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.281 10:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.849 nvme0n1 00:34:15.849 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: ]] 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:15.850 10:08:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.850 10:08:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.850 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.108 nvme0n1 00:34:16.108 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.108 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.108 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.108 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.108 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.108 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.367 10:08:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.367 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.368 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.368 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:16.368 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:16.368 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:16.368 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.368 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.368 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:16.368 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.368 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:16.368 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:16.368 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:16.368 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:16.368 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:34:16.368 10:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.625 nvme0n1 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:16.625 10:08:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: ]] 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.625 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.186 nvme0n1 00:34:17.187 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:34:17.187 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.187 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.187 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.187 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.187 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.444 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.444 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.444 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.444 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.444 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.444 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.444 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:17.444 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.444 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: ]] 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.445 10:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.012 nvme0n1 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: ]] 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.012 10:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.580 nvme0n1 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: ]] 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 
-- # ip=NVMF_INITIATOR_IP 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.580 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.147 nvme0n1 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:19.147 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:19.406 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.406 10:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:19.973 nvme0n1 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: ]] 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:19.973 10:08:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:19.973 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:19.974 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:19.974 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.974 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.974 nvme0n1 00:34:19.974 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.974 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.974 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.974 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.974 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.974 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.974 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.974 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.974 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.974 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: ]] 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:20.232 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.233 nvme0n1 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: ]] 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.233 10:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.492 nvme0n1 00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==:
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk:
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==:
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: ]]
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk:
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:20.492 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:34:20.493 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:20.493 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:34:20.493 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:34:20.493 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:34:20.493 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:20.493 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:20.493 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:20.751 nvme0n1
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=:
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=:
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:20.751 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.010 nvme0n1
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ:
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=:
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ:
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: ]]
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=:
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:21.010 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.270 nvme0n1
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==:
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==:
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==:
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: ]]
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==:
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.270 nvme0n1
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:21.270 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.528 10:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU:
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob:
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU:
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: ]]
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob:
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.528 nvme0n1
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.528 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==:
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk:
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==:
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: ]]
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk:
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.786 nvme0n1
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.786 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:22.043 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=:
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=:
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:22.044 nvme0n1
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:22.044 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ:
00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host --
host/auth.sh@46 -- # ckey=DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: ]] 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.301 
10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.301 10:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.558 nvme0n1 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.558 10:08:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: ]] 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.558 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:22.559 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:22.559 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # local -A ip_candidates 00:34:22.559 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.559 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.559 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:22.559 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.559 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:22.559 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:22.559 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:22.559 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:22.559 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.559 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.816 nvme0n1 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: ]] 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.816 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.074 nvme0n1 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: ]] 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.074 10:08:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.074 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.352 nvme0n1 00:34:23.352 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.352 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.352 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.352 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.352 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.352 10:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.352 10:08:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.352 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.609 nvme0n1 00:34:23.609 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.609 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.609 
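The `DHHC-1:xx:...:` strings being echoed above are DH-HMAC-CHAP secrets. As a sketch of their structure (assuming the usual NVMe DH-HMAC-CHAP encoding: a base64 payload holding the secret followed by a 4-byte CRC transform, with hash id 01/02/03 selecting SHA-256/384/512 and 32/48/64-byte secrets), the lengths of the keys in this log can be checked without SPDK. `dhchap_secret_len` is a hypothetical helper written for this illustration; the CRC verification itself is omitted.

```shell
#!/usr/bin/env bash
# Structural check of DHHC-1 secrets copied from the log above.
# Assumed layout: "DHHC-1:<hash-id>:<base64(secret || 4-byte CRC)>:".
set -euo pipefail

dhchap_secret_len() {
    local key=$1 b64 nbytes
    [[ $key == DHHC-1:*:*: ]] || { echo "not a DHHC-1 key" >&2; return 1; }
    b64=${key#DHHC-1:??:}   # strip the "DHHC-1:<hh>:" prefix
    b64=${b64%:}            # strip the trailing ":"
    nbytes=$(printf '%s' "$b64" | base64 -d | wc -c)
    echo $(( nbytes - 4 ))  # last 4 bytes are the checksum, not the secret
}

# Key 0 (hash id 00) and key 4 (hash id 03) from this test run:
dhchap_secret_len "DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ:"
dhchap_secret_len "DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=:"
```

The two calls report 32- and 64-byte secrets respectively, consistent with the hash ids `00` and `03` carried in the key prefixes.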
10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.609 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.609 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.609 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.867 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.867 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.867 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.867 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.867 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.867 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: ]] 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.868 10:08:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.868 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.125 nvme0n1 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:24.125 10:08:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: ]] 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A 
ip_candidates 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.125 10:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.692 nvme0n1 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: ]] 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:24.692 
10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.692 10:08:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.692 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.951 nvme0n1 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.951 10:08:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: ]] 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.951 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.209 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.209 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.209 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:25.209 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:25.209 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:25.209 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.209 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.209 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:25.209 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.209 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:25.209 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:25.209 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:25.209 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:25.209 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.209 10:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.467 nvme0n1 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.467 10:08:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.467 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.035 nvme0n1 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.035 
10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZhYTlmYjQ4YzVmNGU3Nzc4YTMyMTg5ODUwNTgzODCi+RoJ: 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: ]] 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzg0ODM0NWE3NjUzMGY1MDRhYTNkMGQ0Y2I2YWFjNDgyNmE2MDBlNzVhZWFkNmVlMTg2NGYyYTg4NzhjYjBlMo3FSNg=: 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:26.035 10:08:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.035 10:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.607 nvme0n1 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.607 10:08:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:26.607 10:08:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: ]] 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.607 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.608 10:08:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.608 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.175 nvme0n1 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.175 10:08:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: ]] 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:27.175 10:08:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:27.175 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.176 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:27.176 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:27.176 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:27.176 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:27.176 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.176 10:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.740 nvme0n1 00:34:27.740 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.740 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.740 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.740 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.740 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.740 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.740 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.740 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.740 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.740 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmQ2ZmMzNjRlMTJkYWFjOWI3ZDAyMTJmOGZlNDQwMjI2MzgxOWQwNDA1NGMxMWFjTUE2rQ==: 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: ]] 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRjOWQxNDRmYTQ4MTMzODE4NzhlMTRmOGQyODI4MjXFAEPk: 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:27.998 10:08:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.998 10:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.563 nvme0n1 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzNjJjY2M5YjJkNDU5MzZmNjc2MzE2NTFmZjUzMzllNDY3NWQwNmNkNjZlM2VjMzAyMGZhNGJmNTgwZGM4MNxGiI8=: 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.563 
10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.563 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.129 nvme0n1 00:34:29.129 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.129 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.129 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.129 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.129 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: ]] 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.130 request: 00:34:29.130 { 00:34:29.130 "name": "nvme0", 00:34:29.130 "trtype": "tcp", 00:34:29.130 "traddr": "10.0.0.1", 00:34:29.130 "adrfam": "ipv4", 00:34:29.130 "trsvcid": "4420", 00:34:29.130 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:29.130 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:29.130 "prchk_reftag": false, 00:34:29.130 "prchk_guard": false, 00:34:29.130 "hdgst": false, 00:34:29.130 "ddgst": false, 00:34:29.130 "allow_unrecognized_csi": false, 00:34:29.130 "method": "bdev_nvme_attach_controller", 00:34:29.130 "req_id": 1 00:34:29.130 } 00:34:29.130 Got JSON-RPC error 
response 00:34:29.130 response: 00:34:29.130 { 00:34:29.130 "code": -5, 00:34:29.130 "message": "Input/output error" 00:34:29.130 } 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.130 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 
-- # [[ -z tcp ]] 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.389 request: 
00:34:29.389 { 00:34:29.389 "name": "nvme0", 00:34:29.389 "trtype": "tcp", 00:34:29.389 "traddr": "10.0.0.1", 00:34:29.389 "adrfam": "ipv4", 00:34:29.389 "trsvcid": "4420", 00:34:29.389 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:29.389 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:29.389 "prchk_reftag": false, 00:34:29.389 "prchk_guard": false, 00:34:29.389 "hdgst": false, 00:34:29.389 "ddgst": false, 00:34:29.389 "dhchap_key": "key2", 00:34:29.389 "allow_unrecognized_csi": false, 00:34:29.389 "method": "bdev_nvme_attach_controller", 00:34:29.389 "req_id": 1 00:34:29.389 } 00:34:29.389 Got JSON-RPC error response 00:34:29.389 response: 00:34:29.389 { 00:34:29.389 "code": -5, 00:34:29.389 "message": "Input/output error" 00:34:29.389 } 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:29.389 10:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:29.389 10:08:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.389 request: 00:34:29.389 { 00:34:29.389 "name": "nvme0", 00:34:29.389 "trtype": "tcp", 00:34:29.389 "traddr": "10.0.0.1", 00:34:29.389 "adrfam": "ipv4", 00:34:29.389 "trsvcid": "4420", 00:34:29.389 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:29.389 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:29.389 "prchk_reftag": false, 00:34:29.389 "prchk_guard": false, 00:34:29.389 "hdgst": false, 00:34:29.389 "ddgst": false, 00:34:29.389 "dhchap_key": "key1", 00:34:29.389 "dhchap_ctrlr_key": "ckey2", 00:34:29.389 "allow_unrecognized_csi": false, 00:34:29.389 "method": "bdev_nvme_attach_controller", 00:34:29.389 "req_id": 1 00:34:29.389 } 00:34:29.389 Got JSON-RPC error response 00:34:29.389 response: 00:34:29.389 { 00:34:29.389 "code": -5, 00:34:29.389 "message": "Input/output error" 00:34:29.389 } 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.389 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.647 nvme0n1 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:29.647 10:08:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: ]] 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:29.647 
10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.647 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.905 request: 00:34:29.905 { 00:34:29.905 "name": "nvme0", 00:34:29.905 "dhchap_key": "key1", 00:34:29.905 "dhchap_ctrlr_key": "ckey2", 00:34:29.905 "method": "bdev_nvme_set_keys", 00:34:29.905 "req_id": 1 00:34:29.905 } 00:34:29.905 Got JSON-RPC error response 00:34:29.905 response: 
00:34:29.905 { 00:34:29.905 "code": -13, 00:34:29.905 "message": "Permission denied" 00:34:29.905 } 00:34:29.905 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:29.905 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:29.905 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:29.905 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:29.906 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:29.906 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.906 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:29.906 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.906 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.906 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.906 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:29.906 10:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:30.840 10:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.840 10:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:30.840 10:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.840 10:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.840 10:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.840 10:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:30.840 10:08:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:31.773 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:31.773 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.773 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.773 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.773 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzFkOGI5OWI5YTVjZTcyYzY4MzdjMGUwNWRhMmYxNTM2MjUwNWExZGIxZDA2NzdkpMC8/w==: 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: ]] 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2EyNzA1OTc3N2I4ODY0MzU0NmY2NzBjNjAzM2EwMjNkOTJiZGY3NGU1NTk4OWU5u+pNBw==: 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.032 nvme0n1 00:34:32.032 10:09:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZlNjkwOGYyNzhkNmQyODgxZGU3MzQzZTMyYjAyNjhLOUKU: 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: ]] 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDRiMjVjODI1OWY5ZGU0MGJkM2EzMTZhY2NkNzdmNjVBG2Ob: 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:32.032 10:09:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.032 request: 00:34:32.032 { 00:34:32.032 "name": "nvme0", 00:34:32.032 "dhchap_key": "key2", 00:34:32.032 "dhchap_ctrlr_key": "ckey1", 00:34:32.032 "method": "bdev_nvme_set_keys", 00:34:32.032 "req_id": 1 00:34:32.032 } 00:34:32.032 Got JSON-RPC error response 00:34:32.032 response: 00:34:32.032 { 00:34:32.032 "code": -13, 00:34:32.032 "message": "Permission denied" 00:34:32.032 } 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:32.032 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:32.033 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:32.033 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:32.033 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:32.033 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.033 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:32.033 10:09:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.033 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.291 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.291 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:32.291 10:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:33.237 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.237 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:33.237 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.237 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.237 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.237 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:33.238 rmmod nvme_tcp 
00:34:33.238 rmmod nvme_fabrics 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 1443587 ']' 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 1443587 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1443587 ']' 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1443587 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1443587 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1443587' 00:34:33.238 killing process with pid 1443587 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1443587 00:34:33.238 10:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1443587 00:34:33.497 10:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:33.497 10:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:33.497 10:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:33.497 10:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:33.497 10:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:34:33.497 10:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:33.497 10:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:34:33.497 10:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:33.497 10:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:33.497 10:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.497 10:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:33.497 10:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:36.030 10:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:36.030 10:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:36.030 10:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:36.030 10:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:36.030 10:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:36.030 10:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:34:36.030 10:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:36.030 10:09:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:36.030 10:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:36.030 10:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:36.030 10:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:34:36.030 10:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:34:36.030 10:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:38.070 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:38.070 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:38.070 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:38.070 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:38.070 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:38.070 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:38.070 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:38.070 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:38.070 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:38.070 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:38.070 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:38.328 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:38.328 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:38.328 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:38.328 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:38.328 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:39.260 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:39.260 10:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5Bk /tmp/spdk.key-null.Nk9 /tmp/spdk.key-sha256.mLN /tmp/spdk.key-sha384.tzl 
/tmp/spdk.key-sha512.yAv /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:39.260 10:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:41.794 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:41.794 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:41.795 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:41.795 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:41.795 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:41.795 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:41.795 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:41.795 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:41.795 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:41.795 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:41.795 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:41.795 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:41.795 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:41.795 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:41.795 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:41.795 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:41.795 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:41.795 00:34:41.795 real 0m52.055s 00:34:41.795 user 0m46.772s 00:34:41.795 sys 0m11.758s 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.795 ************************************ 00:34:41.795 END TEST nvmf_auth_host 00:34:41.795 ************************************ 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.795 ************************************ 00:34:41.795 START TEST nvmf_digest 00:34:41.795 ************************************ 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:41.795 * Looking for test storage... 00:34:41.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:41.795 10:09:10 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:41.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.795 --rc genhtml_branch_coverage=1 00:34:41.795 --rc genhtml_function_coverage=1 00:34:41.795 --rc genhtml_legend=1 00:34:41.795 --rc geninfo_all_blocks=1 00:34:41.795 --rc geninfo_unexecuted_blocks=1 00:34:41.795 00:34:41.795 ' 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:41.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.795 --rc genhtml_branch_coverage=1 00:34:41.795 --rc genhtml_function_coverage=1 00:34:41.795 --rc genhtml_legend=1 00:34:41.795 --rc geninfo_all_blocks=1 00:34:41.795 --rc geninfo_unexecuted_blocks=1 00:34:41.795 00:34:41.795 ' 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:41.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.795 --rc genhtml_branch_coverage=1 00:34:41.795 --rc genhtml_function_coverage=1 00:34:41.795 --rc genhtml_legend=1 00:34:41.795 --rc geninfo_all_blocks=1 00:34:41.795 --rc geninfo_unexecuted_blocks=1 00:34:41.795 00:34:41.795 ' 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:41.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.795 --rc genhtml_branch_coverage=1 00:34:41.795 --rc genhtml_function_coverage=1 00:34:41.795 --rc genhtml_legend=1 00:34:41.795 --rc geninfo_all_blocks=1 00:34:41.795 --rc geninfo_unexecuted_blocks=1 00:34:41.795 00:34:41.795 ' 00:34:41.795 10:09:10 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:41.795 
10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.795 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
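The `paths/export.sh` trace above prepends the same toolchain directories (`/opt/golangci`, `/opt/protoc`, `/opt/go`) each time the script is sourced, so PATH accumulates many duplicate entries. A sketch of first-occurrence de-duplication that would keep lookup precedence unchanged (the helper name `dedup_path` is hypothetical, not part of the SPDK scripts):

```shell
# Collapse duplicate entries in a PATH-style colon-separated string,
# keeping only the first occurrence of each directory so that lookup
# precedence is preserved. dedup_path is a hypothetical helper.
dedup_path() {
    local input=$1 out= dir= seen=:
    local IFS=:
    for dir in $input; do
        # Skip any directory already emitted earlier in the string.
        case "$seen" in
            *":$dir:"*) continue ;;
        esac
        seen="$seen$dir:"
        out="${out:+$out:}$dir"
    done
    printf '%s\n' "$out"
}
```

Usage would be `PATH=$(dedup_path "$PATH")`; duplicates are harmless to lookup but make traces like the one above hard to read.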
00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:41.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:41.796 10:09:10 
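The trace just above records a genuine failure: `nvmf/common.sh` line 33 evaluates `'[' '' -eq 1 ']'` and bash reports `[: : integer expression expected`, because the variable under test expands to an empty string where `-eq` requires an integer. A defensive sketch of the pattern that avoids this (the function and variable names here are stand-ins; the log does not show which variable was empty):

```shell
# Numeric tests like [ "$flag" -eq 1 ] fail with "integer expression
# expected" when flag is empty or unset. Defaulting with ${flag:-0}
# keeps the test well-formed. check_flag is a hypothetical stand-in.
check_flag() {
    local flag=${1:-}
    if [ "${flag:-0}" -eq 1 ]; then
        echo enabled
    else
        echo disabled
    fi
}
```

In the log the error is non-fatal only because the `[` failure takes the false branch; the condition is silently treated as "not 1".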
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:41.796 10:09:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:47.059 10:09:15 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:47.059 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:47.059 10:09:15 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:47.059 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.059 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # 
for net_dev in "${!pci_net_devs[@]}" 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:47.060 Found net devices under 0000:86:00.0: cvl_0_0 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:47.060 Found net devices under 0000:86:00.1: cvl_0_1 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # is_hw=yes 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ yes == yes ]] 
00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:47.060 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:47.318 10:09:15 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:47.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:47.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:34:47.318 00:34:47.318 --- 10.0.0.2 ping statistics --- 00:34:47.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.318 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:47.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:47.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:34:47.318 00:34:47.318 --- 10.0.0.1 ping statistics --- 00:34:47.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.318 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # return 0 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:47.318 ************************************ 00:34:47.318 START TEST nvmf_digest_clean 00:34:47.318 ************************************ 00:34:47.318 
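During the network setup above, `nvmf/common.sh@287` calls an `ipts` wrapper, and the expanded command at `common.sh@786` shows what it does: every iptables rule SPDK inserts is tagged with an `-m comment --comment 'SPDK_NVMF:…'` annotation echoing the rule itself, so teardown can later find and delete exactly the rules this test added. A sketch of that tagging pattern (the function below only constructs the command string and never runs iptables; `build_ipts_cmd` is a hypothetical name, and the real wrapper also executes the result):

```shell
# Reproduce the SPDK_NVMF comment-tagging pattern seen in the trace:
# the full rule text is embedded in the rule's own comment so that a
# cleanup pass can match on "SPDK_NVMF:" and remove only SPDK's rules.
# build_ipts_cmd is a hypothetical helper that only prints the command.
build_ipts_cmd() {
    printf 'iptables %s -m comment --comment "SPDK_NVMF:%s"\n' "$*" "$*"
}
```

The two `ping -c 1` checks that follow in the log then confirm the namespace-to-host path works in both directions before any NVMe/TCP traffic is attempted.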
10:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:47.318 10:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:47.318 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:47.318 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:47.318 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:47.318 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:47.318 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:47.318 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=1457635 00:34:47.318 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 1457635 00:34:47.318 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:47.318 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1457635 ']' 00:34:47.318 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.318 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:47.318 10:09:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:47.318 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:47.318 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:47.576 [2024-12-07 10:09:16.058774] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:34:47.576 [2024-12-07 10:09:16.058821] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:47.576 [2024-12-07 10:09:16.119221] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.576 [2024-12-07 10:09:16.162239] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:47.577 [2024-12-07 10:09:16.162279] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:47.577 [2024-12-07 10:09:16.162286] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:47.577 [2024-12-07 10:09:16.162293] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:47.577 [2024-12-07 10:09:16.162299] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:47.577 [2024-12-07 10:09:16.162317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.577 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:47.577 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:47.577 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:47.577 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:47.577 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:47.577 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:47.577 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:47.577 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:47.577 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:47.577 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.577 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:47.835 null0 00:34:47.835 [2024-12-07 10:09:16.333457] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:47.835 [2024-12-07 10:09:16.357654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:47.835 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.835 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:34:47.835 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:47.835 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:47.835 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:47.835 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:47.835 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:47.835 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:47.835 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1457676 00:34:47.835 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1457676 /var/tmp/bperf.sock 00:34:47.835 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:47.835 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1457676 ']' 00:34:47.835 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:47.835 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:47.835 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:47.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:47.835 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:47.835 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:47.835 [2024-12-07 10:09:16.410029] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:34:47.835 [2024-12-07 10:09:16.410072] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1457676 ] 00:34:47.835 [2024-12-07 10:09:16.463469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.836 [2024-12-07 10:09:16.505966] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:48.094 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:48.094 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:48.094 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:48.094 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:48.094 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:48.353 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:48.353 10:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:48.611 nvme0n1 00:34:48.611 10:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:48.611 10:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:48.611 Running I/O for 2 seconds... 00:34:50.922 24864.00 IOPS, 97.12 MiB/s [2024-12-07T09:09:19.648Z] 24358.00 IOPS, 95.15 MiB/s 00:34:50.922 Latency(us) 00:34:50.922 [2024-12-07T09:09:19.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:50.922 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:50.922 nvme0n1 : 2.04 23897.54 93.35 0.00 0.00 5245.97 2550.21 46730.02 00:34:50.922 [2024-12-07T09:09:19.648Z] =================================================================================================================== 00:34:50.922 [2024-12-07T09:09:19.648Z] Total : 23897.54 93.35 0.00 0.00 5245.97 2550.21 46730.02 00:34:50.922 { 00:34:50.922 "results": [ 00:34:50.922 { 00:34:50.922 "job": "nvme0n1", 00:34:50.922 "core_mask": "0x2", 00:34:50.922 "workload": "randread", 00:34:50.922 "status": "finished", 00:34:50.922 "queue_depth": 128, 00:34:50.922 "io_size": 4096, 00:34:50.922 "runtime": 2.043892, 00:34:50.922 "iops": 23897.54448865204, 00:34:50.922 "mibps": 93.34978315879704, 00:34:50.922 "io_failed": 0, 00:34:50.922 "io_timeout": 0, 00:34:50.922 "avg_latency_us": 5245.974317383115, 00:34:50.922 "min_latency_us": 2550.2052173913044, 00:34:50.922 "max_latency_us": 46730.01739130435 00:34:50.922 } 00:34:50.922 ], 00:34:50.922 "core_count": 1 00:34:50.922 } 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:50.922 | select(.opcode=="crc32c") 00:34:50.922 | "\(.module_name) \(.executed)"' 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1457676 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1457676 ']' 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1457676 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1457676 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1457676' 00:34:50.922 killing process with pid 1457676 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1457676 00:34:50.922 Received shutdown signal, test time was about 2.000000 seconds 00:34:50.922 00:34:50.922 Latency(us) 00:34:50.922 [2024-12-07T09:09:19.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:50.922 [2024-12-07T09:09:19.648Z] =================================================================================================================== 00:34:50.922 [2024-12-07T09:09:19.648Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:50.922 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1457676 00:34:51.180 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:51.180 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:51.180 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:51.180 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:51.180 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:51.180 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:51.180 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:51.180 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1458351 00:34:51.180 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 1458351 /var/tmp/bperf.sock 00:34:51.180 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:51.180 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1458351 ']' 00:34:51.180 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:51.180 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:51.180 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:51.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:51.180 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:51.180 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:51.180 [2024-12-07 10:09:19.833165] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:34:51.180 [2024-12-07 10:09:19.833227] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458351 ] 00:34:51.180 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:51.180 Zero copy mechanism will not be used. 
00:34:51.180 [2024-12-07 10:09:19.887568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.438 [2024-12-07 10:09:19.930079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:51.438 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:51.438 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:51.438 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:51.438 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:51.438 10:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:51.696 10:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:51.696 10:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:51.954 nvme0n1 00:34:51.954 10:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:51.954 10:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:51.954 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:51.954 Zero copy mechanism will not be used. 00:34:51.954 Running I/O for 2 seconds... 
00:34:54.267 4886.00 IOPS, 610.75 MiB/s [2024-12-07T09:09:22.993Z] 4980.50 IOPS, 622.56 MiB/s 00:34:54.267 Latency(us) 00:34:54.267 [2024-12-07T09:09:22.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:54.267 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:54.267 nvme0n1 : 2.00 4979.88 622.49 0.00 0.00 3210.16 487.96 8149.26 00:34:54.267 [2024-12-07T09:09:22.993Z] =================================================================================================================== 00:34:54.267 [2024-12-07T09:09:22.993Z] Total : 4979.88 622.49 0.00 0.00 3210.16 487.96 8149.26 00:34:54.267 { 00:34:54.267 "results": [ 00:34:54.267 { 00:34:54.267 "job": "nvme0n1", 00:34:54.267 "core_mask": "0x2", 00:34:54.267 "workload": "randread", 00:34:54.267 "status": "finished", 00:34:54.267 "queue_depth": 16, 00:34:54.267 "io_size": 131072, 00:34:54.267 "runtime": 2.00346, 00:34:54.267 "iops": 4979.884799297216, 00:34:54.267 "mibps": 622.485599912152, 00:34:54.267 "io_failed": 0, 00:34:54.267 "io_timeout": 0, 00:34:54.267 "avg_latency_us": 3210.164919488737, 00:34:54.267 "min_latency_us": 487.9582608695652, 00:34:54.267 "max_latency_us": 8149.2591304347825 00:34:54.267 } 00:34:54.267 ], 00:34:54.267 "core_count": 1 00:34:54.267 } 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:54.267 | select(.opcode=="crc32c") 00:34:54.267 | "\(.module_name) \(.executed)"' 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1458351 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1458351 ']' 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1458351 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1458351 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1458351' 00:34:54.267 killing process with pid 1458351 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1458351 00:34:54.267 Received shutdown signal, test time was about 2.000000 seconds 
00:34:54.267 00:34:54.267 Latency(us) 00:34:54.267 [2024-12-07T09:09:22.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:54.267 [2024-12-07T09:09:22.993Z] =================================================================================================================== 00:34:54.267 [2024-12-07T09:09:22.993Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:54.267 10:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1458351 00:34:54.526 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:54.526 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:54.526 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:54.526 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:54.526 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:54.526 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:54.526 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:54.526 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1458822 00:34:54.526 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1458822 /var/tmp/bperf.sock 00:34:54.526 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:54.526 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1458822 ']' 00:34:54.526 10:09:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:54.526 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:54.526 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:54.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:54.526 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:54.526 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:54.526 [2024-12-07 10:09:23.140265] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:34:54.526 [2024-12-07 10:09:23.140314] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458822 ] 00:34:54.526 [2024-12-07 10:09:23.195078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.526 [2024-12-07 10:09:23.232414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:54.785 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:54.785 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:54.785 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:54.785 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:54.785 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:55.043 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:55.043 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:55.302 nvme0n1 00:34:55.302 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:55.302 10:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:55.302 Running I/O for 2 seconds... 
00:34:57.611 27808.00 IOPS, 108.62 MiB/s [2024-12-07T09:09:26.337Z] 27816.00 IOPS, 108.66 MiB/s 00:34:57.611 Latency(us) 00:34:57.611 [2024-12-07T09:09:26.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.611 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:57.611 nvme0n1 : 2.01 27823.65 108.69 0.00 0.00 4593.81 1923.34 7351.43 00:34:57.611 [2024-12-07T09:09:26.337Z] =================================================================================================================== 00:34:57.611 [2024-12-07T09:09:26.337Z] Total : 27823.65 108.69 0.00 0.00 4593.81 1923.34 7351.43 00:34:57.611 { 00:34:57.611 "results": [ 00:34:57.611 { 00:34:57.611 "job": "nvme0n1", 00:34:57.611 "core_mask": "0x2", 00:34:57.611 "workload": "randwrite", 00:34:57.611 "status": "finished", 00:34:57.611 "queue_depth": 128, 00:34:57.611 "io_size": 4096, 00:34:57.611 "runtime": 2.006387, 00:34:57.611 "iops": 27823.645189088646, 00:34:57.611 "mibps": 108.68611401987752, 00:34:57.611 "io_failed": 0, 00:34:57.611 "io_timeout": 0, 00:34:57.611 "avg_latency_us": 4593.81099152242, 00:34:57.611 "min_latency_us": 1923.3391304347826, 00:34:57.611 "max_latency_us": 7351.429565217391 00:34:57.611 } 00:34:57.611 ], 00:34:57.611 "core_count": 1 00:34:57.611 } 00:34:57.611 10:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:57.611 10:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:57.611 10:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:57.611 10:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:57.611 | select(.opcode=="crc32c") 00:34:57.611 | "\(.module_name) \(.executed)"' 00:34:57.612 10:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:57.612 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:57.612 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:57.612 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:57.612 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:57.612 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1458822 00:34:57.612 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1458822 ']' 00:34:57.612 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1458822 00:34:57.612 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:57.612 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:57.612 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1458822 00:34:57.612 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:57.612 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:57.612 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1458822' 00:34:57.612 killing process with pid 1458822 00:34:57.612 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1458822 00:34:57.612 Received shutdown signal, test time was about 2.000000 seconds 
00:34:57.612 00:34:57.612 Latency(us) 00:34:57.612 [2024-12-07T09:09:26.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.612 [2024-12-07T09:09:26.338Z] =================================================================================================================== 00:34:57.612 [2024-12-07T09:09:26.338Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:57.612 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1458822 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1459301 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1459301 /var/tmp/bperf.sock 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1459301 ']' 00:34:57.871 10:09:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:57.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:57.871 [2024-12-07 10:09:26.415684] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:34:57.871 [2024-12-07 10:09:26.415733] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459301 ] 00:34:57.871 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:57.871 Zero copy mechanism will not be used. 
00:34:57.871 [2024-12-07 10:09:26.469061] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.871 [2024-12-07 10:09:26.510177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:57.871 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:58.130 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:58.130 10:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:58.388 nvme0n1 00:34:58.646 10:09:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:58.646 10:09:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:58.646 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:58.646 Zero copy mechanism will not be used. 00:34:58.646 Running I/O for 2 seconds... 
00:35:00.516 6001.00 IOPS, 750.12 MiB/s [2024-12-07T09:09:29.242Z] 6415.50 IOPS, 801.94 MiB/s 00:35:00.516 Latency(us) 00:35:00.516 [2024-12-07T09:09:29.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:00.516 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:00.516 nvme0n1 : 2.00 6413.53 801.69 0.00 0.00 2490.80 1823.61 10086.85 00:35:00.516 [2024-12-07T09:09:29.242Z] =================================================================================================================== 00:35:00.516 [2024-12-07T09:09:29.242Z] Total : 6413.53 801.69 0.00 0.00 2490.80 1823.61 10086.85 00:35:00.516 { 00:35:00.516 "results": [ 00:35:00.516 { 00:35:00.516 "job": "nvme0n1", 00:35:00.516 "core_mask": "0x2", 00:35:00.516 "workload": "randwrite", 00:35:00.516 "status": "finished", 00:35:00.516 "queue_depth": 16, 00:35:00.516 "io_size": 131072, 00:35:00.516 "runtime": 2.003108, 00:35:00.516 "iops": 6413.533369144349, 00:35:00.516 "mibps": 801.6916711430437, 00:35:00.516 "io_failed": 0, 00:35:00.516 "io_timeout": 0, 00:35:00.516 "avg_latency_us": 2490.800387977569, 00:35:00.516 "min_latency_us": 1823.6104347826088, 00:35:00.516 "max_latency_us": 10086.845217391305 00:35:00.516 } 00:35:00.516 ], 00:35:00.516 "core_count": 1 00:35:00.516 } 00:35:00.516 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:00.775 | select(.opcode=="crc32c") 00:35:00.775 | "\(.module_name) \(.executed)"' 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1459301 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1459301 ']' 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1459301 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1459301 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1459301' 00:35:00.775 killing process with pid 1459301 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1459301 00:35:00.775 Received shutdown signal, test time was about 2.000000 seconds 
00:35:00.775 00:35:00.775 Latency(us) 00:35:00.775 [2024-12-07T09:09:29.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:00.775 [2024-12-07T09:09:29.501Z] =================================================================================================================== 00:35:00.775 [2024-12-07T09:09:29.501Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:00.775 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1459301 00:35:01.034 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1457635 00:35:01.034 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1457635 ']' 00:35:01.034 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1457635 00:35:01.034 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:01.034 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:01.034 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1457635 00:35:01.034 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:01.034 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:01.034 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1457635' 00:35:01.034 killing process with pid 1457635 00:35:01.034 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1457635 00:35:01.034 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1457635 00:35:01.292 00:35:01.292 
real 0m13.896s 00:35:01.292 user 0m26.524s 00:35:01.292 sys 0m4.534s 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:01.292 ************************************ 00:35:01.292 END TEST nvmf_digest_clean 00:35:01.292 ************************************ 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:01.292 ************************************ 00:35:01.292 START TEST nvmf_digest_error 00:35:01.292 ************************************ 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=1460009 00:35:01.292 
10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 1460009 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1460009 ']' 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:01.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:01.292 10:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:01.292 [2024-12-07 10:09:30.004330] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:35:01.292 [2024-12-07 10:09:30.004374] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:01.550 [2024-12-07 10:09:30.065043] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.550 [2024-12-07 10:09:30.117790] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:01.551 [2024-12-07 10:09:30.117826] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:01.551 [2024-12-07 10:09:30.117834] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:01.551 [2024-12-07 10:09:30.117840] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:01.551 [2024-12-07 10:09:30.117845] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:01.551 [2024-12-07 10:09:30.117863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:01.551 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:01.551 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:01.551 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:01.551 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:01.551 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:01.551 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:01.551 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:01.551 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.551 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:01.551 [2024-12-07 10:09:30.218401] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:01.551 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.551 10:09:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:01.551 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:01.551 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.551 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:01.809 null0 00:35:01.810 [2024-12-07 10:09:30.305495] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:01.810 [2024-12-07 10:09:30.329703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:01.810 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.810 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:01.810 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:01.810 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:01.810 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:01.810 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:01.810 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1460032 00:35:01.810 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:01.810 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1460032 /var/tmp/bperf.sock 00:35:01.810 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1460032 ']' 
00:35:01.810 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:01.810 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:01.810 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:01.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:01.810 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:01.810 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:01.810 [2024-12-07 10:09:30.371459] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:35:01.810 [2024-12-07 10:09:30.371501] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1460032 ] 00:35:01.810 [2024-12-07 10:09:30.423886] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.810 [2024-12-07 10:09:30.464126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.068 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:02.068 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:02.068 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:02.068 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:02.068 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:02.068 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.068 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:02.068 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.068 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:02.068 10:09:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:02.326 nvme0n1 00:35:02.327 10:09:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:02.327 10:09:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.327 10:09:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:02.327 10:09:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.327 10:09:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:02.327 10:09:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:02.585 Running I/O for 2 seconds... 00:35:02.585 [2024-12-07 10:09:31.151098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.585 [2024-12-07 10:09:31.151134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.585 [2024-12-07 10:09:31.151146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.585 [2024-12-07 10:09:31.161658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.585 [2024-12-07 10:09:31.161685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.585 [2024-12-07 10:09:31.161694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.585 [2024-12-07 10:09:31.173823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.585 [2024-12-07 10:09:31.173847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.585 [2024-12-07 10:09:31.173857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.585 [2024-12-07 10:09:31.182588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.585 [2024-12-07 10:09:31.182612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1971 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.585 [2024-12-07 10:09:31.182621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.585 [2024-12-07 10:09:31.195182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.585 [2024-12-07 10:09:31.195207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.585 [2024-12-07 10:09:31.195216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.585 [2024-12-07 10:09:31.208322] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.585 [2024-12-07 10:09:31.208347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.585 [2024-12-07 10:09:31.208357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.585 [2024-12-07 10:09:31.220994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.585 [2024-12-07 10:09:31.221024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.585 [2024-12-07 10:09:31.221033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.585 [2024-12-07 10:09:31.228776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.585 [2024-12-07 10:09:31.228798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.585 [2024-12-07 10:09:31.228806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.585 [2024-12-07 10:09:31.240712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.585 [2024-12-07 10:09:31.240736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.585 [2024-12-07 10:09:31.240745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.585 [2024-12-07 10:09:31.249574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.585 [2024-12-07 10:09:31.249598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.585 [2024-12-07 10:09:31.249607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.585 [2024-12-07 10:09:31.260966] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.585 [2024-12-07 10:09:31.260989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.585 [2024-12-07 10:09:31.260999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.585 [2024-12-07 10:09:31.269342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23bd4d0) 00:35:02.585 [2024-12-07 10:09:31.269366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.585 [2024-12-07 10:09:31.269375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.585 [2024-12-07 10:09:31.281259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.585 [2024-12-07 10:09:31.281282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.585 [2024-12-07 10:09:31.281290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.585 [2024-12-07 10:09:31.289901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.585 [2024-12-07 10:09:31.289923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.585 [2024-12-07 10:09:31.289931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.585 [2024-12-07 10:09:31.301052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.585 [2024-12-07 10:09:31.301074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.585 [2024-12-07 10:09:31.301086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.844 [2024-12-07 10:09:31.312275] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.844 [2024-12-07 10:09:31.312301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.844 [2024-12-07 10:09:31.312310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.844 [2024-12-07 10:09:31.320695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.844 [2024-12-07 10:09:31.320719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.844 [2024-12-07 10:09:31.320728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.844 [2024-12-07 10:09:31.334249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.844 [2024-12-07 10:09:31.334272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.844 [2024-12-07 10:09:31.334281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.844 [2024-12-07 10:09:31.347017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.844 [2024-12-07 10:09:31.347039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.844 [2024-12-07 10:09:31.347048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:02.844 [2024-12-07 10:09:31.354973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.844 [2024-12-07 10:09:31.354994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.844 [2024-12-07 10:09:31.355002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.844 [2024-12-07 10:09:31.367015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.844 [2024-12-07 10:09:31.367037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.844 [2024-12-07 10:09:31.367045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.844 [2024-12-07 10:09:31.379685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.844 [2024-12-07 10:09:31.379706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.844 [2024-12-07 10:09:31.379715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.844 [2024-12-07 10:09:31.390757] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.844 [2024-12-07 10:09:31.390778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.844 [2024-12-07 10:09:31.390787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.844 [2024-12-07 10:09:31.399160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.844 [2024-12-07 10:09:31.399181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.844 [2024-12-07 10:09:31.399190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.844 [2024-12-07 10:09:31.411244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.844 [2024-12-07 10:09:31.411267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.844 [2024-12-07 10:09:31.411276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.844 [2024-12-07 10:09:31.420087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.844 [2024-12-07 10:09:31.420108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.844 [2024-12-07 10:09:31.420116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.844 [2024-12-07 10:09:31.432459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.844 [2024-12-07 10:09:31.432481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.844 [2024-12-07 
10:09:31.432489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.844 [2024-12-07 10:09:31.445169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.844 [2024-12-07 10:09:31.445190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.845 [2024-12-07 10:09:31.445198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.845 [2024-12-07 10:09:31.455675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.845 [2024-12-07 10:09:31.455696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.845 [2024-12-07 10:09:31.455705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.845 [2024-12-07 10:09:31.463682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.845 [2024-12-07 10:09:31.463704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.845 [2024-12-07 10:09:31.463713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.845 [2024-12-07 10:09:31.474692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.845 [2024-12-07 10:09:31.474713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2795 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.845 [2024-12-07 10:09:31.474722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.845 [2024-12-07 10:09:31.484409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.845 [2024-12-07 10:09:31.484429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.845 [2024-12-07 10:09:31.484441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.845 [2024-12-07 10:09:31.493559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.845 [2024-12-07 10:09:31.493580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.845 [2024-12-07 10:09:31.493589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.845 [2024-12-07 10:09:31.504324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.845 [2024-12-07 10:09:31.504345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.845 [2024-12-07 10:09:31.504354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.845 [2024-12-07 10:09:31.512483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.845 [2024-12-07 10:09:31.512505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.845 [2024-12-07 10:09:31.512513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.845 [2024-12-07 10:09:31.523019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.845 [2024-12-07 10:09:31.523041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.845 [2024-12-07 10:09:31.523050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.845 [2024-12-07 10:09:31.531593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.845 [2024-12-07 10:09:31.531614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.845 [2024-12-07 10:09:31.531623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.845 [2024-12-07 10:09:31.543842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.845 [2024-12-07 10:09:31.543864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.845 [2024-12-07 10:09:31.543872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.845 [2024-12-07 10:09:31.552612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23bd4d0) 00:35:02.845 [2024-12-07 10:09:31.552633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.845 [2024-12-07 10:09:31.552641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.845 [2024-12-07 10:09:31.564733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:02.845 [2024-12-07 10:09:31.564755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.845 [2024-12-07 10:09:31.564764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.104 [2024-12-07 10:09:31.576821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.104 [2024-12-07 10:09:31.576849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.104 [2024-12-07 10:09:31.576858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.104 [2024-12-07 10:09:31.585122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.104 [2024-12-07 10:09:31.585144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.104 [2024-12-07 10:09:31.585152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.104 [2024-12-07 10:09:31.597120] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.104 [2024-12-07 10:09:31.597141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.104 [2024-12-07 10:09:31.597149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.104 [2024-12-07 10:09:31.608354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.104 [2024-12-07 10:09:31.608377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.104 [2024-12-07 10:09:31.608386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.104 [2024-12-07 10:09:31.616463] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.104 [2024-12-07 10:09:31.616484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.616493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.627616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.627638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.627646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.637707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.637727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.637736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.646565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.646586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.646594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.656562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.656582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.656591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.665960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.665982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.665990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.675790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.675811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.675819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.684974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.684995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.685004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.694371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.694392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.694401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.702847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.702868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 
10:09:31.702877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.715640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.715662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.715671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.726351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.726372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.726381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.735127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.735147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.735156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.746060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.746080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15073 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.746092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.754824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.754845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.754853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.767442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.767463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.767471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.775895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.775917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.775925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.787675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.787696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.787704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.796017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.796038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.796046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.806562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.806584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.105 [2024-12-07 10:09:31.806592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.105 [2024-12-07 10:09:31.818311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.105 [2024-12-07 10:09:31.818331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.106 [2024-12-07 10:09:31.818339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.365 [2024-12-07 10:09:31.831737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23bd4d0) 00:35:03.365 [2024-12-07 10:09:31.831760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.365 [2024-12-07 10:09:31.831770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.365 [2024-12-07 10:09:31.840719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.365 [2024-12-07 10:09:31.840745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.365 [2024-12-07 10:09:31.840754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.365 [2024-12-07 10:09:31.852842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.365 [2024-12-07 10:09:31.852864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.365 [2024-12-07 10:09:31.852872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.365 [2024-12-07 10:09:31.861394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.365 [2024-12-07 10:09:31.861415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.365 [2024-12-07 10:09:31.861423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.365 [2024-12-07 10:09:31.874382] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.365 [2024-12-07 10:09:31.874403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.365 [2024-12-07 10:09:31.874412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.365 [2024-12-07 10:09:31.882476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.365 [2024-12-07 10:09:31.882497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.365 [2024-12-07 10:09:31.882506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.365 [2024-12-07 10:09:31.894101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.365 [2024-12-07 10:09:31.894122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.365 [2024-12-07 10:09:31.894131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.365 [2024-12-07 10:09:31.906741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.365 [2024-12-07 10:09:31.906762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.365 [2024-12-07 10:09:31.906770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:03.365 [2024-12-07 10:09:31.918612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.365 [2024-12-07 10:09:31.918633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.365 [2024-12-07 10:09:31.918642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.365 [2024-12-07 10:09:31.931024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.365 [2024-12-07 10:09:31.931045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.365 [2024-12-07 10:09:31.931057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.365 [2024-12-07 10:09:31.939581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.365 [2024-12-07 10:09:31.939602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.365 [2024-12-07 10:09:31.939610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.365 [2024-12-07 10:09:31.952526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.365 [2024-12-07 10:09:31.952548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.365 [2024-12-07 10:09:31.952557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.365 [2024-12-07 10:09:31.964930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.366 [2024-12-07 10:09:31.964957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.366 [2024-12-07 10:09:31.964965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.366 [2024-12-07 10:09:31.977264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.366 [2024-12-07 10:09:31.977285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.366 [2024-12-07 10:09:31.977294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.366 [2024-12-07 10:09:31.985326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.366 [2024-12-07 10:09:31.985347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.366 [2024-12-07 10:09:31.985355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.366 [2024-12-07 10:09:31.997484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.366 [2024-12-07 10:09:31.997505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.366 [2024-12-07 
10:09:31.997513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.366 [2024-12-07 10:09:32.010128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.366 [2024-12-07 10:09:32.010149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.366 [2024-12-07 10:09:32.010157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.366 [2024-12-07 10:09:32.021494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.366 [2024-12-07 10:09:32.021515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.366 [2024-12-07 10:09:32.021524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.366 [2024-12-07 10:09:32.029969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.366 [2024-12-07 10:09:32.029992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.366 [2024-12-07 10:09:32.030001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.366 [2024-12-07 10:09:32.040430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.366 [2024-12-07 10:09:32.040450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:567 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.366 [2024-12-07 10:09:32.040458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.366 [2024-12-07 10:09:32.050599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.366 [2024-12-07 10:09:32.050620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.366 [2024-12-07 10:09:32.050628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.366 [2024-12-07 10:09:32.058487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.366 [2024-12-07 10:09:32.058507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.366 [2024-12-07 10:09:32.058515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.366 [2024-12-07 10:09:32.068680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.366 [2024-12-07 10:09:32.068700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.366 [2024-12-07 10:09:32.068708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.366 [2024-12-07 10:09:32.077900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.366 [2024-12-07 10:09:32.077921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.366 [2024-12-07 10:09:32.077930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.625 [2024-12-07 10:09:32.089003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.625 [2024-12-07 10:09:32.089026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.625 [2024-12-07 10:09:32.089035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.625 [2024-12-07 10:09:32.097623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.625 [2024-12-07 10:09:32.097646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.625 [2024-12-07 10:09:32.097654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.625 [2024-12-07 10:09:32.109446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.625 [2024-12-07 10:09:32.109468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.625 [2024-12-07 10:09:32.109477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.625 [2024-12-07 10:09:32.122035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23bd4d0) 00:35:03.625 [2024-12-07 10:09:32.122056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.625 [2024-12-07 10:09:32.122065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.625 [2024-12-07 10:09:32.133603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.625 [2024-12-07 10:09:32.133624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.625 [2024-12-07 10:09:32.133632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.625 24065.00 IOPS, 94.00 MiB/s [2024-12-07T09:09:32.351Z] [2024-12-07 10:09:32.146227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.625 [2024-12-07 10:09:32.146248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.625 [2024-12-07 10:09:32.146257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.626 [2024-12-07 10:09:32.155141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.626 [2024-12-07 10:09:32.155162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.626 [2024-12-07 10:09:32.155171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.626 
[2024-12-07 10:09:32.168195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.626 [2024-12-07 10:09:32.168216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.626 [2024-12-07 10:09:32.168225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.626 [2024-12-07 10:09:32.180891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.626 [2024-12-07 10:09:32.180913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.626 [2024-12-07 10:09:32.180921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.626 [2024-12-07 10:09:32.192908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.626 [2024-12-07 10:09:32.192929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.626 [2024-12-07 10:09:32.192937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.626 [2024-12-07 10:09:32.201766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.626 [2024-12-07 10:09:32.201788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.626 [2024-12-07 10:09:32.201797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.626 [2024-12-07 10:09:32.214499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.626 [2024-12-07 10:09:32.214520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.626 [2024-12-07 10:09:32.214532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.626 [2024-12-07 10:09:32.225681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.626 [2024-12-07 10:09:32.225702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.626 [2024-12-07 10:09:32.225711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.626 [2024-12-07 10:09:32.236288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.626 [2024-12-07 10:09:32.236309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.626 [2024-12-07 10:09:32.236318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.626 [2024-12-07 10:09:32.244763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.626 [2024-12-07 10:09:32.244784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.626 [2024-12-07 10:09:32.244792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.626 [2024-12-07 10:09:32.256977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.626 [2024-12-07 10:09:32.256998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.626 [2024-12-07 10:09:32.257006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.626 [2024-12-07 10:09:32.269750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.626 [2024-12-07 10:09:32.269772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.626 [2024-12-07 10:09:32.269780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.626 [2024-12-07 10:09:32.278112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.626 [2024-12-07 10:09:32.278134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.626 [2024-12-07 10:09:32.278142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.626 [2024-12-07 10:09:32.290152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:03.626 [2024-12-07 10:09:32.290172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13195 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:03.626 [2024-12-07 10:09:32.290181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.626 [2024-12-07 10:09:32.301275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0)
00:35:03.626 [2024-12-07 10:09:32.301295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.626 [2024-12-07 10:09:32.301303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... repeated records of the same pattern elided: data digest error on tqpair=(0x23bd4d0), followed by a READ command print (varying cid/lba) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, timestamps 10:09:32.313 through 10:09:33.114 ...]
00:35:04.407 [2024-12-07 10:09:33.126524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0)
00:35:04.407 [2024-12-07 10:09:33.126545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.407 [2024-12-07
10:09:33.126554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.665 [2024-12-07 10:09:33.140109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23bd4d0) 00:35:04.665 [2024-12-07 10:09:33.140131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.665 [2024-12-07 10:09:33.140140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.665 24218.00 IOPS, 94.60 MiB/s 00:35:04.665 Latency(us) 00:35:04.665 [2024-12-07T09:09:33.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.665 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:04.665 nvme0n1 : 2.00 24242.88 94.70 0.00 0.00 5274.40 2564.45 18692.01 00:35:04.665 [2024-12-07T09:09:33.391Z] =================================================================================================================== 00:35:04.665 [2024-12-07T09:09:33.391Z] Total : 24242.88 94.70 0.00 0.00 5274.40 2564.45 18692.01 00:35:04.665 { 00:35:04.665 "results": [ 00:35:04.665 { 00:35:04.665 "job": "nvme0n1", 00:35:04.665 "core_mask": "0x2", 00:35:04.665 "workload": "randread", 00:35:04.665 "status": "finished", 00:35:04.665 "queue_depth": 128, 00:35:04.665 "io_size": 4096, 00:35:04.665 "runtime": 2.0043, 00:35:04.665 "iops": 24242.87781270269, 00:35:04.665 "mibps": 94.69874145586988, 00:35:04.665 "io_failed": 0, 00:35:04.665 "io_timeout": 0, 00:35:04.665 "avg_latency_us": 5274.402454181841, 00:35:04.665 "min_latency_us": 2564.4521739130437, 00:35:04.665 "max_latency_us": 18692.006956521738 00:35:04.665 } 00:35:04.665 ], 00:35:04.665 "core_count": 1 00:35:04.665 } 00:35:04.665 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:04.665 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:04.665 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:04.665 | .driver_specific 00:35:04.665 | .nvme_error 00:35:04.665 | .status_code 00:35:04.665 | .command_transient_transport_error' 00:35:04.665 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:04.665 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 190 > 0 )) 00:35:04.665 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1460032 00:35:04.665 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1460032 ']' 00:35:04.665 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1460032 00:35:04.665 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:04.665 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:04.665 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1460032 00:35:04.924 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:04.924 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:04.924 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1460032' 00:35:04.924 killing process with pid 1460032 00:35:04.924 
10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1460032 00:35:04.924 Received shutdown signal, test time was about 2.000000 seconds 00:35:04.924 00:35:04.924 Latency(us) 00:35:04.924 [2024-12-07T09:09:33.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.924 [2024-12-07T09:09:33.650Z] =================================================================================================================== 00:35:04.924 [2024-12-07T09:09:33.650Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:04.924 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1460032 00:35:04.924 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:04.924 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:04.924 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:04.924 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:04.924 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:04.924 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1460513 00:35:04.924 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1460513 /var/tmp/bperf.sock 00:35:04.924 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:04.924 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1460513 ']' 00:35:04.924 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:04.924 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:04.924 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:04.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:04.924 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:04.924 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:04.924 [2024-12-07 10:09:33.640339] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:35:04.924 [2024-12-07 10:09:33.640387] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1460513 ] 00:35:04.924 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:04.924 Zero copy mechanism will not be used. 
00:35:05.183 [2024-12-07 10:09:33.693824] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.183 [2024-12-07 10:09:33.735696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.183 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:05.183 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:05.183 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:05.183 10:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:05.439 10:09:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:05.439 10:09:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.439 10:09:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:05.439 10:09:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.439 10:09:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:05.439 10:09:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:05.696 nvme0n1 00:35:05.696 10:09:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:05.697 10:09:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.697 10:09:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:05.697 10:09:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.697 10:09:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:05.697 10:09:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:05.697 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:05.697 Zero copy mechanism will not be used. 00:35:05.697 Running I/O for 2 seconds... 00:35:05.697 [2024-12-07 10:09:34.408795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.697 [2024-12-07 10:09:34.408829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.697 [2024-12-07 10:09:34.408840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.697 [2024-12-07 10:09:34.416490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.697 [2024-12-07 10:09:34.416518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.697 [2024-12-07 10:09:34.416528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.956 
[2024-12-07 10:09:34.425354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.425380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.425390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.432034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.432059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.432068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.439236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.439260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.439269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.447002] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.447026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.447035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.455437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.455463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.455472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.464166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.464192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.464201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.473019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.473044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.473053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.482085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.482110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.482119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.490255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.490278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.490287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.497077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.497100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.497109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.503851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.503874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.503886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.511172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.511196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:05.956 [2024-12-07 10:09:34.511204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.519135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.519160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.519168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.528012] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.528035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.528044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.536660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.536683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.536692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.545901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.545924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.545933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.554964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.554987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.554996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.563894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.563917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.563926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.572123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.572147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.572156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.581074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.581102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.581111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.590549] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.590574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.590583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.599203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.599228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.599236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.956 [2024-12-07 10:09:34.607528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.956 [2024-12-07 10:09:34.607553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.956 [2024-12-07 10:09:34.607562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.957 [2024-12-07 10:09:34.616810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xea4590) 00:35:05.957 [2024-12-07 10:09:34.616834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.957 [2024-12-07 10:09:34.616842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.957 [2024-12-07 10:09:34.625523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.957 [2024-12-07 10:09:34.625547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.957 [2024-12-07 10:09:34.625555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.957 [2024-12-07 10:09:34.634825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.957 [2024-12-07 10:09:34.634849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.957 [2024-12-07 10:09:34.634859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.957 [2024-12-07 10:09:34.644413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.957 [2024-12-07 10:09:34.644436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.957 [2024-12-07 10:09:34.644445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.957 [2024-12-07 10:09:34.652938] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.957 [2024-12-07 10:09:34.652969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.957 [2024-12-07 10:09:34.652978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.957 [2024-12-07 10:09:34.661938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.957 [2024-12-07 10:09:34.661968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.957 [2024-12-07 10:09:34.661977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.957 [2024-12-07 10:09:34.671811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:05.957 [2024-12-07 10:09:34.671836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.957 [2024-12-07 10:09:34.671844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.216 [2024-12-07 10:09:34.680414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.680439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.680449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.689354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.689378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.689386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.698270] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.698294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.698303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.707303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.707327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.707336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.716112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.716136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.716145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.724837] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.724859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.724868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.733697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.733720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.733733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.742148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.742172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.742180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.750779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.750802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.750811] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.759700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.759724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.759732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.767916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.767939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.767953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.774932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.774963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.774972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.782255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.782278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.782287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.788548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.788570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.788579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.796470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.796493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.796502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.804465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.804495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.804503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.812707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.812730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.812738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.821991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.822014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.822023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.830907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.830931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.830940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.839440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.839464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.217 [2024-12-07 10:09:34.839472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.217 [2024-12-07 10:09:34.848723] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.217 [2024-12-07 10:09:34.848746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.218 [2024-12-07 10:09:34.848755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.218 [2024-12-07 10:09:34.857118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.218 [2024-12-07 10:09:34.857141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.218 [2024-12-07 10:09:34.857150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.218 [2024-12-07 10:09:34.866096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.218 [2024-12-07 10:09:34.866120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.218 [2024-12-07 10:09:34.866129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.218 [2024-12-07 10:09:34.874912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.218 [2024-12-07 10:09:34.874936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.218 [2024-12-07 10:09:34.874945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.218 [2024-12-07 10:09:34.884072] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 
00:35:06.218 [2024-12-07 10:09:34.884096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.218 [2024-12-07 10:09:34.884104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.218 [2024-12-07 10:09:34.892802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.218 [2024-12-07 10:09:34.892826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.218 [2024-12-07 10:09:34.892835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.218 [2024-12-07 10:09:34.902498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.218 [2024-12-07 10:09:34.902521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.218 [2024-12-07 10:09:34.902531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.218 [2024-12-07 10:09:34.911265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.218 [2024-12-07 10:09:34.911288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.218 [2024-12-07 10:09:34.911297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.218 [2024-12-07 10:09:34.918708] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.218 [2024-12-07 10:09:34.918731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.218 [2024-12-07 10:09:34.918740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.218 [2024-12-07 10:09:34.928707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.218 [2024-12-07 10:09:34.928729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.218 [2024-12-07 10:09:34.928738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.218 [2024-12-07 10:09:34.935968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.218 [2024-12-07 10:09:34.936002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.218 [2024-12-07 10:09:34.936011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.477 [2024-12-07 10:09:34.943171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.477 [2024-12-07 10:09:34.943195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.477 [2024-12-07 10:09:34.943204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:35:06.477 [2024-12-07 10:09:34.951045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.477 [2024-12-07 10:09:34.951070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.477 [2024-12-07 10:09:34.951083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.477 [2024-12-07 10:09:34.959452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.477 [2024-12-07 10:09:34.959476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.477 [2024-12-07 10:09:34.959484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.477 [2024-12-07 10:09:34.968119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.477 [2024-12-07 10:09:34.968143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.477 [2024-12-07 10:09:34.968152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.477 [2024-12-07 10:09:34.976428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.477 [2024-12-07 10:09:34.976451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.477 [2024-12-07 10:09:34.976459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.477 [2024-12-07 10:09:34.985370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.477 [2024-12-07 10:09:34.985393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.477 [2024-12-07 10:09:34.985401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.477 [2024-12-07 10:09:34.994508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.477 [2024-12-07 10:09:34.994531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.477 [2024-12-07 10:09:34.994540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.477 [2024-12-07 10:09:35.003069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.477 [2024-12-07 10:09:35.003092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.477 [2024-12-07 10:09:35.003101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.477 [2024-12-07 10:09:35.010901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.477 [2024-12-07 10:09:35.010924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.477 [2024-12-07 10:09:35.010932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.477 [2024-12-07 10:09:35.018203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.477 [2024-12-07 10:09:35.018226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.477 [2024-12-07 10:09:35.018234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.477 [2024-12-07 10:09:35.025036] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.025057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.025065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.031535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.031557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.031565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.038222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.038244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:06.478 [2024-12-07 10:09:35.038252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.045184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.045206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.045214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.052393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.052415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.052423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.059357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.059380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.059388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.066324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.066346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.066354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.072687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.072710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.072718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.079400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.079423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.079435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.086134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.086158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.086166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.093268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.093291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.093300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.099614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.099636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.099644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.106512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.106535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.106543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.112906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.112928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.112937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.119377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 
00:35:06.478 [2024-12-07 10:09:35.119400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.119408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.126743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.126765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.126774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.133911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.133933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.133941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.140386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.140412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.140421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.146956] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.146978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.146986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.153926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.153956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.153965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.478 [2024-12-07 10:09:35.161162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.478 [2024-12-07 10:09:35.161185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.478 [2024-12-07 10:09:35.161193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.479 [2024-12-07 10:09:35.167764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.479 [2024-12-07 10:09:35.167787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.479 [2024-12-07 10:09:35.167795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:35:06.479 [2024-12-07 10:09:35.173967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.479 [2024-12-07 10:09:35.173990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.479 [2024-12-07 10:09:35.173998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.479 [2024-12-07 10:09:35.182562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.479 [2024-12-07 10:09:35.182586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.479 [2024-12-07 10:09:35.182594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.479 [2024-12-07 10:09:35.189121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.479 [2024-12-07 10:09:35.189144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.479 [2024-12-07 10:09:35.189152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.479 [2024-12-07 10:09:35.197496] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.479 [2024-12-07 10:09:35.197520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.479 [2024-12-07 10:09:35.197529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.737 [2024-12-07 10:09:35.206368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.737 [2024-12-07 10:09:35.206393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.737 [2024-12-07 10:09:35.206402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.737 [2024-12-07 10:09:35.214144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.737 [2024-12-07 10:09:35.214181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.737 [2024-12-07 10:09:35.214190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.737 [2024-12-07 10:09:35.221472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.737 [2024-12-07 10:09:35.221495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.737 [2024-12-07 10:09:35.221503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.737 [2024-12-07 10:09:35.229427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.737 [2024-12-07 10:09:35.229451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.737 [2024-12-07 10:09:35.229460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.737 [2024-12-07 10:09:35.237661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.737 [2024-12-07 10:09:35.237685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.737 [2024-12-07 10:09:35.237694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.737 [2024-12-07 10:09:35.245570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.737 [2024-12-07 10:09:35.245595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.737 [2024-12-07 10:09:35.245603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.737 [2024-12-07 10:09:35.253694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.737 [2024-12-07 10:09:35.253718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.737 [2024-12-07 10:09:35.253727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.737 [2024-12-07 10:09:35.261196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.737 [2024-12-07 10:09:35.261220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:06.737 [2024-12-07 10:09:35.261229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.737 [2024-12-07 10:09:35.268449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.737 [2024-12-07 10:09:35.268473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.737 [2024-12-07 10:09:35.268486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.737 [2024-12-07 10:09:35.275208] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.737 [2024-12-07 10:09:35.275231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.737 [2024-12-07 10:09:35.275239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.737 [2024-12-07 10:09:35.281460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.737 [2024-12-07 10:09:35.281483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.737 [2024-12-07 10:09:35.281491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.737 [2024-12-07 10:09:35.287547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.737 [2024-12-07 10:09:35.287569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.737 [2024-12-07 10:09:35.287577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.293640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.293662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.293670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.299810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.299834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.299842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.305792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.305816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.305824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.311819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.311842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.311850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.317743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.317766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.317775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.323768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.323795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.323803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.329682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.329704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.329712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.335405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 
00:35:06.738 [2024-12-07 10:09:35.335427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.335435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.341233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.341256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.341265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.347069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.347091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.347100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.352873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.352896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.352904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.359600] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.359623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.359632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.367830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.367854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.367863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.375673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.375696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.375705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.384156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.384179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.384188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.393001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.393024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.393033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.402062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.402086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.402095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.738 3980.00 IOPS, 497.50 MiB/s [2024-12-07T09:09:35.464Z] [2024-12-07 10:09:35.411017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.411040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.411049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.418935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.418964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.418973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.423620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.423642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.423650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.431689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.431713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.431722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.441787] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.441810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.441820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.450172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.450194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:06.738 [2024-12-07 10:09:35.450207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.738 [2024-12-07 10:09:35.458358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.738 [2024-12-07 10:09:35.458381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.738 [2024-12-07 10:09:35.458390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.997 [2024-12-07 10:09:35.465552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.997 [2024-12-07 10:09:35.465578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.997 [2024-12-07 10:09:35.465587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.997 [2024-12-07 10:09:35.472898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.997 [2024-12-07 10:09:35.472920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.997 [2024-12-07 10:09:35.472929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.997 [2024-12-07 10:09:35.479809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.997 [2024-12-07 10:09:35.479832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.997 [2024-12-07 10:09:35.479841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.997 [2024-12-07 10:09:35.487485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.997 [2024-12-07 10:09:35.487508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.997 [2024-12-07 10:09:35.487516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.997 [2024-12-07 10:09:35.495304] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.997 [2024-12-07 10:09:35.495327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.997 [2024-12-07 10:09:35.495336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.997 [2024-12-07 10:09:35.502453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.997 [2024-12-07 10:09:35.502476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.997 [2024-12-07 10:09:35.502485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.997 [2024-12-07 10:09:35.509920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.997 [2024-12-07 10:09:35.509943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.997 [2024-12-07 10:09:35.509958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.997 [2024-12-07 10:09:35.518000] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.997 [2024-12-07 10:09:35.518028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.997 [2024-12-07 10:09:35.518037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.997 [2024-12-07 10:09:35.526789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.997 [2024-12-07 10:09:35.526812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.997 [2024-12-07 10:09:35.526821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.997 [2024-12-07 10:09:35.535838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.997 [2024-12-07 10:09:35.535862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.997 [2024-12-07 10:09:35.535873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.997 [2024-12-07 10:09:35.544934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 
00:35:06.997 [2024-12-07 10:09:35.544965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.997 [2024-12-07 10:09:35.544974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.997 [2024-12-07 10:09:35.552595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.997 [2024-12-07 10:09:35.552618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.997 [2024-12-07 10:09:35.552627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.997 [2024-12-07 10:09:35.559936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.997 [2024-12-07 10:09:35.559964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.997 [2024-12-07 10:09:35.559973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.997 [2024-12-07 10:09:35.566734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.997 [2024-12-07 10:09:35.566757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.997 [2024-12-07 10:09:35.566765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.997 [2024-12-07 10:09:35.572345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.997 [2024-12-07 10:09:35.572369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.997 [2024-12-07 10:09:35.572378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.998 [2024-12-07 10:09:35.578537] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.998 [2024-12-07 10:09:35.578561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.998 [2024-12-07 10:09:35.578569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.998 [2024-12-07 10:09:35.585024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.998 [2024-12-07 10:09:35.585046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.998 [2024-12-07 10:09:35.585055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.998 [2024-12-07 10:09:35.591511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.998 [2024-12-07 10:09:35.591534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.998 [2024-12-07 10:09:35.591542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.998 [2024-12-07 10:09:35.597944] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.998 [2024-12-07 10:09:35.597972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.998 [2024-12-07 10:09:35.597982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.998 [2024-12-07 10:09:35.604555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.998 [2024-12-07 10:09:35.604580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.998 [2024-12-07 10:09:35.604589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.998 [2024-12-07 10:09:35.610209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.998 [2024-12-07 10:09:35.610233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.998 [2024-12-07 10:09:35.610241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.998 [2024-12-07 10:09:35.616813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.998 [2024-12-07 10:09:35.616837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.998 [2024-12-07 10:09:35.616846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:35:06.998 [2024-12-07 10:09:35.623673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.998 [2024-12-07 10:09:35.623696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.998 [2024-12-07 10:09:35.623704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.998 [2024-12-07 10:09:35.630003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.998 [2024-12-07 10:09:35.630027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.998 [2024-12-07 10:09:35.630048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.998 [2024-12-07 10:09:35.637035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.998 [2024-12-07 10:09:35.637058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.998 [2024-12-07 10:09:35.637072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.998 [2024-12-07 10:09:35.644749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:06.998 [2024-12-07 10:09:35.644774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.998 [2024-12-07 10:09:35.644784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:06.998 [2024-12-07 10:09:35.653760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:06.998 [2024-12-07 10:09:35.653786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:06.998 [2024-12-07 10:09:35.653798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:06.998 [2024-12-07 10:09:35.662605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:06.998 [2024-12-07 10:09:35.662631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:06.998 [2024-12-07 10:09:35.662641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:06.998 [2024-12-07 10:09:35.670079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:06.998 [2024-12-07 10:09:35.670103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:06.998 [2024-12-07 10:09:35.670112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:06.998 [2024-12-07 10:09:35.677267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:06.998 [2024-12-07 10:09:35.677290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:06.998 [2024-12-07 10:09:35.677298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:06.998 [2024-12-07 10:09:35.684918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:06.998 [2024-12-07 10:09:35.684942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:06.998 [2024-12-07 10:09:35.684957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:06.998 [2024-12-07 10:09:35.693971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:06.998 [2024-12-07 10:09:35.693996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:06.998 [2024-12-07 10:09:35.694005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:06.998 [2024-12-07 10:09:35.702172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:06.998 [2024-12-07 10:09:35.702195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:06.998 [2024-12-07 10:09:35.702204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:06.998 [2024-12-07 10:09:35.710461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:06.998 [2024-12-07 10:09:35.710485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:06.998 [2024-12-07 10:09:35.710493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:06.998 [2024-12-07 10:09:35.718944] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:06.998 [2024-12-07 10:09:35.718976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:06.998 [2024-12-07 10:09:35.718986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.256 [2024-12-07 10:09:35.727190] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.256 [2024-12-07 10:09:35.727214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.256 [2024-12-07 10:09:35.727223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.256 [2024-12-07 10:09:35.734648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.256 [2024-12-07 10:09:35.734671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.256 [2024-12-07 10:09:35.734680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.256 [2024-12-07 10:09:35.741591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.256 [2024-12-07 10:09:35.741613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.256 [2024-12-07 10:09:35.741622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:07.256 [2024-12-07 10:09:35.748083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.256 [2024-12-07 10:09:35.748107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.256 [2024-12-07 10:09:35.748115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.256 [2024-12-07 10:09:35.756106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.256 [2024-12-07 10:09:35.756129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.256 [2024-12-07 10:09:35.756138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.256 [2024-12-07 10:09:35.764474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.256 [2024-12-07 10:09:35.764497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.256 [2024-12-07 10:09:35.764506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.256 [2024-12-07 10:09:35.772218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.256 [2024-12-07 10:09:35.772242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.256 [2024-12-07 10:09:35.772258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:07.256 [2024-12-07 10:09:35.781208] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.256 [2024-12-07 10:09:35.781232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.256 [2024-12-07 10:09:35.781240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.789965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.789988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.789997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.798905] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.798929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.798937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.808317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.808340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.808349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.817694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.817717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.817725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.826875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.826899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.826907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.835727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.835749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.835758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.844333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.844357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.844365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.852940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.852973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.852982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.860842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.860865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.860875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.868099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.868121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.868130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.876228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.876251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.876260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.885169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.885191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.885200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.893476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.893499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.893508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.902410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.902432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.902441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.911499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.911521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.911530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.919900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.919922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.919931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.928641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.928665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.928674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.937568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.937591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.937600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.945326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.945349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.945359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.952961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.952984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.952993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.961142] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.961165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.961173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.968413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.968435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.968444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.257 [2024-12-07 10:09:35.975201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.257 [2024-12-07 10:09:35.975223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.257 [2024-12-07 10:09:35.975231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:07.516 [2024-12-07 10:09:35.982358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.516 [2024-12-07 10:09:35.982383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.516 [2024-12-07 10:09:35.982392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.516 [2024-12-07 10:09:35.989255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.516 [2024-12-07 10:09:35.989278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.516 [2024-12-07 10:09:35.989291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.516 [2024-12-07 10:09:35.996042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.516 [2024-12-07 10:09:35.996065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.516 [2024-12-07 10:09:35.996074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.516 [2024-12-07 10:09:36.002569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.516 [2024-12-07 10:09:36.002591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.516 [2024-12-07 10:09:36.002600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:07.516 [2024-12-07 10:09:36.009505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.516 [2024-12-07 10:09:36.009528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.516 [2024-12-07 10:09:36.009536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.516 [2024-12-07 10:09:36.016031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.516 [2024-12-07 10:09:36.016053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.516 [2024-12-07 10:09:36.016062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.516 [2024-12-07 10:09:36.022400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.516 [2024-12-07 10:09:36.022421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.516 [2024-12-07 10:09:36.022429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.516 [2024-12-07 10:09:36.028910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.516 [2024-12-07 10:09:36.028933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.516 [2024-12-07 10:09:36.028941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:07.516 [2024-12-07 10:09:36.035531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.516 [2024-12-07 10:09:36.035552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.516 [2024-12-07 10:09:36.035561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.516 [2024-12-07 10:09:36.041159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.516 [2024-12-07 10:09:36.041180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.516 [2024-12-07 10:09:36.041188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.516 [2024-12-07 10:09:36.044750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.516 [2024-12-07 10:09:36.044777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.516 [2024-12-07 10:09:36.044785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.516 [2024-12-07 10:09:36.051313] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.051335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.051343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.057966] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.057988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.057996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.064456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.064479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.064487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.070864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.070886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.070894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.077668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.077689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.077697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.084028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.084050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.084058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.090919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.090942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.090956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.097760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.097782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.097790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.104775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.104797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.104805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.110708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.110730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.110738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.117057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.117079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.117087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.123232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.123253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.123262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.129275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.129297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.129306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.135252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.135274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.135283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.142360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.142383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.142391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.148820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.148842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.148851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.155205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.155228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.155240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.161820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.161843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.161852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.168693] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.168715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.168723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.176690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.176712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.176721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.184160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.184183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.184192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.192271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.192294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.192302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.200138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.200161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.200170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.207466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.207488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.207497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.213486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.213508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.213516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.220088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.220115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.220123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.226646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.517 [2024-12-07 10:09:36.226669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.517 [2024-12-07 10:09:36.226677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.517 [2024-12-07 10:09:36.233442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.518 [2024-12-07 10:09:36.233464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.518 [2024-12-07 10:09:36.233473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.776 [2024-12-07 10:09:36.239244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.776 [2024-12-07 10:09:36.239270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.777 [2024-12-07 10:09:36.239283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:07.777 [2024-12-07 10:09:36.245502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.777 [2024-12-07 10:09:36.245525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.777 [2024-12-07 10:09:36.245534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:07.777 [2024-12-07 10:09:36.251447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590)
00:35:07.777 [2024-12-07 10:09:36.251470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.251479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.257972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.257993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.258002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.264000] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.264021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.264030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.270164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.270185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.270194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.273706] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.273728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.273736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.280290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.280311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.280319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.286483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.286506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.286514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.292664] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.292685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.292693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.298516] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.298537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.298545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.304499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.304521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.304529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.310505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.310527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.310536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.316487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.316509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.316516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.322421] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.322443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.322455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.328359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.328382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.328391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.334860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.334883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.334891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.340686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.340708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.340716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.346280] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.346302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.346311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.351900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.351921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.351929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.357418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.357439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.777 [2024-12-07 10:09:36.357447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.777 [2024-12-07 10:09:36.363035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.777 [2024-12-07 10:09:36.363056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:07.778 [2024-12-07 10:09:36.363064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.778 [2024-12-07 10:09:36.368471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.778 [2024-12-07 10:09:36.368493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.778 [2024-12-07 10:09:36.368501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.778 [2024-12-07 10:09:36.373895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.778 [2024-12-07 10:09:36.373917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.778 [2024-12-07 10:09:36.373925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.778 [2024-12-07 10:09:36.379500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.778 [2024-12-07 10:09:36.379523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.778 [2024-12-07 10:09:36.379531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.778 [2024-12-07 10:09:36.385074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.778 [2024-12-07 10:09:36.385095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.778 [2024-12-07 10:09:36.385103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.778 [2024-12-07 10:09:36.390656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.778 [2024-12-07 10:09:36.390678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.778 [2024-12-07 10:09:36.390686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.778 [2024-12-07 10:09:36.396172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.778 [2024-12-07 10:09:36.396194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.778 [2024-12-07 10:09:36.396202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.778 [2024-12-07 10:09:36.401536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 00:35:07.778 [2024-12-07 10:09:36.401558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.778 [2024-12-07 10:09:36.401566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.778 4179.00 IOPS, 522.38 MiB/s [2024-12-07T09:09:36.504Z] [2024-12-07 10:09:36.407874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea4590) 
00:35:07.778 [2024-12-07 10:09:36.407896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.778 [2024-12-07 10:09:36.407904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.778 00:35:07.778 Latency(us) 00:35:07.778 [2024-12-07T09:09:36.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.778 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:07.778 nvme0n1 : 2.00 4179.15 522.39 0.00 0.00 3825.35 961.67 10257.81 00:35:07.778 [2024-12-07T09:09:36.504Z] =================================================================================================================== 00:35:07.778 [2024-12-07T09:09:36.504Z] Total : 4179.15 522.39 0.00 0.00 3825.35 961.67 10257.81 00:35:07.778 { 00:35:07.778 "results": [ 00:35:07.778 { 00:35:07.778 "job": "nvme0n1", 00:35:07.778 "core_mask": "0x2", 00:35:07.778 "workload": "randread", 00:35:07.778 "status": "finished", 00:35:07.778 "queue_depth": 16, 00:35:07.778 "io_size": 131072, 00:35:07.778 "runtime": 2.003756, 00:35:07.778 "iops": 4179.151553382747, 00:35:07.778 "mibps": 522.3939441728434, 00:35:07.778 "io_failed": 0, 00:35:07.778 "io_timeout": 0, 00:35:07.778 "avg_latency_us": 3825.3524860593343, 00:35:07.778 "min_latency_us": 961.6695652173913, 00:35:07.778 "max_latency_us": 10257.808695652175 00:35:07.778 } 00:35:07.778 ], 00:35:07.778 "core_count": 1 00:35:07.778 } 00:35:07.778 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:07.778 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:07.778 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:07.778 | .driver_specific 
00:35:07.778 | .nvme_error 00:35:07.778 | .status_code 00:35:07.778 | .command_transient_transport_error' 00:35:07.778 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:08.037 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 270 > 0 )) 00:35:08.037 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1460513 00:35:08.037 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1460513 ']' 00:35:08.037 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1460513 00:35:08.037 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:08.037 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:08.037 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1460513 00:35:08.037 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:08.037 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:08.037 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1460513' 00:35:08.037 killing process with pid 1460513 00:35:08.037 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1460513 00:35:08.037 Received shutdown signal, test time was about 2.000000 seconds 00:35:08.037 00:35:08.037 Latency(us) 00:35:08.037 [2024-12-07T09:09:36.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:35:08.037 [2024-12-07T09:09:36.763Z] =================================================================================================================== 00:35:08.037 [2024-12-07T09:09:36.763Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:08.037 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1460513 00:35:08.296 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:35:08.296 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:08.296 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:08.296 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:08.296 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:08.296 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1461071 00:35:08.296 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1461071 /var/tmp/bperf.sock 00:35:08.296 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:08.296 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1461071 ']' 00:35:08.297 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:08.297 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:08.297 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:35:08.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:08.297 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:08.297 10:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.297 [2024-12-07 10:09:36.901376] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:35:08.297 [2024-12-07 10:09:36.901424] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1461071 ] 00:35:08.297 [2024-12-07 10:09:36.955866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.297 [2024-12-07 10:09:36.998298] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.556 10:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:08.556 10:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:08.556 10:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:08.556 10:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:08.556 10:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:08.556 10:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.556 10:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:35:08.556 10:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.556 10:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:08.556 10:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:09.122 nvme0n1 00:35:09.122 10:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:09.122 10:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.122 10:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:09.122 10:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.122 10:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:09.122 10:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:09.122 Running I/O for 2 seconds... 
00:35:09.122 [2024-12-07 10:09:37.807464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ed920 00:35:09.122 [2024-12-07 10:09:37.808323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.122 [2024-12-07 10:09:37.808353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:09.122 [2024-12-07 10:09:37.816283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198edd58 00:35:09.122 [2024-12-07 10:09:37.816961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.123 [2024-12-07 10:09:37.817000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:09.123 [2024-12-07 10:09:37.825245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fef90 00:35:09.123 [2024-12-07 10:09:37.825931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.123 [2024-12-07 10:09:37.825955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:09.123 [2024-12-07 10:09:37.835162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e8d30 00:35:09.123 [2024-12-07 10:09:37.835968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.123 [2024-12-07 10:09:37.835989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:09.123 [2024-12-07 10:09:37.845183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fd208 00:35:09.381 [2024-12-07 10:09:37.846173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.381 [2024-12-07 10:09:37.846199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:09.381 [2024-12-07 10:09:37.855270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fcdd0 00:35:09.381 [2024-12-07 10:09:37.856266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:37.856287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:37.865178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fcdd0 00:35:09.382 [2024-12-07 10:09:37.866170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:37.866190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:37.873860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198df988 00:35:09.382 [2024-12-07 10:09:37.874857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:37.874876] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:37.883712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198eea00 00:35:09.382 [2024-12-07 10:09:37.884877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:37.884897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:37.893463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f2948 00:35:09.382 [2024-12-07 10:09:37.894772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:37.894795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:37.903271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fb480 00:35:09.382 [2024-12-07 10:09:37.904633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:37.904653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:37.913047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e5220 00:35:09.382 [2024-12-07 10:09:37.914523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 
10:09:37.914542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:37.921162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198eaef0 00:35:09.382 [2024-12-07 10:09:37.922135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:37.922156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:37.930407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ec840 00:35:09.382 [2024-12-07 10:09:37.931379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:37.931398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:37.939834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f2948 00:35:09.382 [2024-12-07 10:09:37.940795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:37.940814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:37.948347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198df118 00:35:09.382 [2024-12-07 10:09:37.949292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22769 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:37.949312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:37.959449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198df118 00:35:09.382 [2024-12-07 10:09:37.960879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:37.960898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:37.966005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f81e0 00:35:09.382 [2024-12-07 10:09:37.966596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:37.966615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:37.975399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fef90 00:35:09.382 [2024-12-07 10:09:37.975979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:37.975999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:37.984790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e27f0 00:35:09.382 [2024-12-07 10:09:37.985355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:81 nsid:1 lba:24540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:37.985375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:37.993961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e27f0 00:35:09.382 [2024-12-07 10:09:37.994525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:37.994544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:38.005035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fa3a0 00:35:09.382 [2024-12-07 10:09:38.006093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:38.006113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:38.013723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fa3a0 00:35:09.382 [2024-12-07 10:09:38.014778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:38.014797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:38.023744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fa3a0 00:35:09.382 [2024-12-07 10:09:38.024789] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:38.024808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:38.033036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f9b30 00:35:09.382 [2024-12-07 10:09:38.034071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:38.034091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:09.382 [2024-12-07 10:09:38.042458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fac10 00:35:09.382 [2024-12-07 10:09:38.043486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.382 [2024-12-07 10:09:38.043505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:09.383 [2024-12-07 10:09:38.050969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f7100 00:35:09.383 [2024-12-07 10:09:38.051991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.383 [2024-12-07 10:09:38.052010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:09.383 [2024-12-07 10:09:38.062063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f7100 
00:35:09.383 [2024-12-07 10:09:38.063558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.383 [2024-12-07 10:09:38.063577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:09.383 [2024-12-07 10:09:38.070341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f7970 00:35:09.383 [2024-12-07 10:09:38.071374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.383 [2024-12-07 10:09:38.071393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:09.383 [2024-12-07 10:09:38.079057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e6738 00:35:09.383 [2024-12-07 10:09:38.080055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.383 [2024-12-07 10:09:38.080075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:09.383 [2024-12-07 10:09:38.088939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e6738 00:35:09.383 [2024-12-07 10:09:38.089942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.383 [2024-12-07 10:09:38.089964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:09.383 [2024-12-07 10:09:38.098350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x218b0a0) with pdu=0x2000198fbcf0 00:35:09.383 [2024-12-07 10:09:38.099349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.383 [2024-12-07 10:09:38.099369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:09.642 [2024-12-07 10:09:38.107875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fbcf0 00:35:09.642 [2024-12-07 10:09:38.108905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.642 [2024-12-07 10:09:38.108927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:09.642 [2024-12-07 10:09:38.117241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e5658 00:35:09.642 [2024-12-07 10:09:38.118236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.642 [2024-12-07 10:09:38.118257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:09.642 [2024-12-07 10:09:38.126609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ebfd0 00:35:09.642 [2024-12-07 10:09:38.127622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.642 [2024-12-07 10:09:38.127642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:09.642 [2024-12-07 10:09:38.135890] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ebfd0 00:35:09.643 [2024-12-07 10:09:38.136875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.136898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.145172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f4298 00:35:09.643 [2024-12-07 10:09:38.146149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.146169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.154552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f6458 00:35:09.643 [2024-12-07 10:09:38.155521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.155540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.163176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198dece0 00:35:09.643 [2024-12-07 10:09:38.164135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.164155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 
m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.172460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ff3c8 00:35:09.643 [2024-12-07 10:09:38.173450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.173469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.183901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ff3c8 00:35:09.643 [2024-12-07 10:09:38.185355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.185374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.191977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fd208 00:35:09.643 [2024-12-07 10:09:38.192918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.192937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.201211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e1b48 00:35:09.643 [2024-12-07 10:09:38.202148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.202167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.210624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198dfdc0 00:35:09.643 [2024-12-07 10:09:38.211553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.211573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.219788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198dfdc0 00:35:09.643 [2024-12-07 10:09:38.220722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.220742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.229046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fdeb0 00:35:09.643 [2024-12-07 10:09:38.229966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.230002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.238511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e3d08 00:35:09.643 [2024-12-07 10:09:38.239428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.239447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.247661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e3d08 00:35:09.643 [2024-12-07 10:09:38.248573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.248593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.256909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e2c28 00:35:09.643 [2024-12-07 10:09:38.257818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.257838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.266288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e4578 00:35:09.643 [2024-12-07 10:09:38.267194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.267215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.274802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ea680 00:35:09.643 [2024-12-07 10:09:38.275720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:09.643 [2024-12-07 10:09:38.275739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.285960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ea680 00:35:09.643 [2024-12-07 10:09:38.287283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.287302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.293926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ef270 00:35:09.643 [2024-12-07 10:09:38.294808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.294828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.303168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f0ff8 00:35:09.643 [2024-12-07 10:09:38.304045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.304065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.312557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e7c50 00:35:09.643 [2024-12-07 10:09:38.313427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19631 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.313448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.321369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f46d0 00:35:09.643 [2024-12-07 10:09:38.322226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.322245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.331245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f46d0 00:35:09.643 [2024-12-07 10:09:38.332136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.643 [2024-12-07 10:09:38.332155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:09.643 [2024-12-07 10:09:38.340765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ee190 00:35:09.644 [2024-12-07 10:09:38.341631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.644 [2024-12-07 10:09:38.341651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:09.644 [2024-12-07 10:09:38.349313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f20d8 00:35:09.644 [2024-12-07 10:09:38.350180] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.644 [2024-12-07 10:09:38.350200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:09.644 [2024-12-07 10:09:38.359274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f20d8 00:35:09.644 [2024-12-07 10:09:38.360120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.644 [2024-12-07 10:09:38.360140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:09.902 [2024-12-07 10:09:38.368804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ed920 00:35:09.902 [2024-12-07 10:09:38.369673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.902 [2024-12-07 10:09:38.369695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:09.902 [2024-12-07 10:09:38.378263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f57b0 00:35:09.902 [2024-12-07 10:09:38.379091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.902 [2024-12-07 10:09:38.379118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:09.902 [2024-12-07 10:09:38.387460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f57b0 00:35:09.902 [2024-12-07 10:09:38.388290] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.902 [2024-12-07 10:09:38.388310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:09.902 [2024-12-07 10:09:38.396116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e95a0 00:35:09.902 [2024-12-07 10:09:38.396929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.902 [2024-12-07 10:09:38.396953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:09.902 [2024-12-07 10:09:38.406000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e95a0 00:35:09.902 [2024-12-07 10:09:38.406821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.902 [2024-12-07 10:09:38.406842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.415405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f31b8 00:35:09.903 [2024-12-07 10:09:38.416220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.416240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.423937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with 
pdu=0x2000198e9e10 00:35:09.903 [2024-12-07 10:09:38.424745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.424764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.433830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e9e10 00:35:09.903 [2024-12-07 10:09:38.434641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.434661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.443265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f31b8 00:35:09.903 [2024-12-07 10:09:38.444067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.444087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.452414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f31b8 00:35:09.903 [2024-12-07 10:09:38.453217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.453237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.461677] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ed0b0 00:35:09.903 [2024-12-07 10:09:38.462472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.462495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.471012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ed0b0 00:35:09.903 [2024-12-07 10:09:38.471813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.471832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.480351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f2948 00:35:09.903 [2024-12-07 10:09:38.481137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.481156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.489744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e8d30 00:35:09.903 [2024-12-07 10:09:38.490520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.490538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.498273] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f4f40 00:35:09.903 [2024-12-07 10:09:38.499036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.499056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.509342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f4f40 00:35:09.903 [2024-12-07 10:09:38.510595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.510615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.517404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ee190 00:35:09.903 [2024-12-07 10:09:38.518165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.518184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.526059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f1868 00:35:09.903 [2024-12-07 10:09:38.526803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.526822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
00:35:09.903 [2024-12-07 10:09:38.536017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f1868 00:35:09.903 [2024-12-07 10:09:38.536762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.536781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.545425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198eea00 00:35:09.903 [2024-12-07 10:09:38.546174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.546193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.554794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198eaef0 00:35:09.903 [2024-12-07 10:09:38.555536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.555556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.563346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e73e0 00:35:09.903 [2024-12-07 10:09:38.564076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.564096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.573337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f1ca0 00:35:09.903 [2024-12-07 10:09:38.574187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.574207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.584383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f1ca0 00:35:09.903 [2024-12-07 10:09:38.585712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.585731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.592502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f4298 00:35:09.903 [2024-12-07 10:09:38.593350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.593369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.601722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f5378 00:35:09.903 [2024-12-07 10:09:38.602754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.602774] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:09.903 [2024-12-07 10:09:38.611289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198dfdc0 00:35:09.903 [2024-12-07 10:09:38.612123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.903 [2024-12-07 10:09:38.612143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:09.904 [2024-12-07 10:09:38.619826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f2d80 00:35:09.904 [2024-12-07 10:09:38.620657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.904 [2024-12-07 10:09:38.620675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:10.161 [2024-12-07 10:09:38.629786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e3498 00:35:10.161 [2024-12-07 10:09:38.630763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.161 [2024-12-07 10:09:38.630785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:10.161 [2024-12-07 10:09:38.641019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e3498 00:35:10.161 [2024-12-07 10:09:38.642441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.161 [2024-12-07 10:09:38.642461] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:10.161 [2024-12-07 10:09:38.649083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e88f8 00:35:10.161 [2024-12-07 10:09:38.650015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.161 [2024-12-07 10:09:38.650035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:10.161 [2024-12-07 10:09:38.658288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f8618 00:35:10.161 [2024-12-07 10:09:38.659216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.161 [2024-12-07 10:09:38.659237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:10.161 [2024-12-07 10:09:38.667679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fb048 00:35:10.161 [2024-12-07 10:09:38.668596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.161 [2024-12-07 10:09:38.668616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:10.161 [2024-12-07 10:09:38.676287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e3d08 00:35:10.161 [2024-12-07 10:09:38.677233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:10.161 [2024-12-07 10:09:38.677253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:10.161 [2024-12-07 10:09:38.687396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e3d08 00:35:10.161 [2024-12-07 10:09:38.688785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.161 [2024-12-07 10:09:38.688804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:10.161 [2024-12-07 10:09:38.695561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e99d8 00:35:10.162 [2024-12-07 10:09:38.696466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.696486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:10.162 [2024-12-07 10:09:38.704835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f96f8 00:35:10.162 [2024-12-07 10:09:38.705734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.705758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:10.162 [2024-12-07 10:09:38.714232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fc128 00:35:10.162 [2024-12-07 10:09:38.715122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:10407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.715141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:10.162 [2024-12-07 10:09:38.722750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e5220 00:35:10.162 [2024-12-07 10:09:38.723631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.723649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:10.162 [2024-12-07 10:09:38.733863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e5220 00:35:10.162 [2024-12-07 10:09:38.735415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.735443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:10.162 [2024-12-07 10:09:38.743829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ecc78 00:35:10.162 [2024-12-07 10:09:38.745318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.745338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:10.162 [2024-12-07 10:09:38.751896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f2510 00:35:10.162 [2024-12-07 10:09:38.752895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.752914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:10.162 [2024-12-07 10:09:38.761130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ed0b0 00:35:10.162 [2024-12-07 10:09:38.762117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.762136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:10.162 [2024-12-07 10:09:38.770509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ff3c8 00:35:10.162 [2024-12-07 10:09:38.771491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.771510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:10.162 [2024-12-07 10:09:38.779048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e0630 00:35:10.162 [2024-12-07 10:09:38.780023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.780042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:10.162 [2024-12-07 10:09:38.790128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e0630 
00:35:10.162 [2024-12-07 10:09:38.791585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.791604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:10.162 27146.00 IOPS, 106.04 MiB/s [2024-12-07T09:09:38.888Z] [2024-12-07 10:09:38.798160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f0bc0 00:35:10.162 [2024-12-07 10:09:38.799124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.799144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:10.162 [2024-12-07 10:09:38.808585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e27f0 00:35:10.162 [2024-12-07 10:09:38.810035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.810055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:10.162 [2024-12-07 10:09:38.816666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f6458 00:35:10.162 [2024-12-07 10:09:38.817612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.817632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.162 [2024-12-07 10:09:38.826174] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e1710 00:35:10.162 [2024-12-07 10:09:38.827120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.827140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:10.162 [2024-12-07 10:09:38.835602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e38d0 00:35:10.162 [2024-12-07 10:09:38.836535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.836554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:10.162 [2024-12-07 10:09:38.846061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e38d0 00:35:10.162 [2024-12-07 10:09:38.847475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.847494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:10.162 [2024-12-07 10:09:38.852601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fbcf0 00:35:10.162 [2024-12-07 10:09:38.853166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.853185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 
dnr:0 00:35:10.162 [2024-12-07 10:09:38.863826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198eee38 00:35:10.162 [2024-12-07 10:09:38.864877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.864901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.162 [2024-12-07 10:09:38.873069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198eee38 00:35:10.162 [2024-12-07 10:09:38.874109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.162 [2024-12-07 10:09:38.874129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.162 [2024-12-07 10:09:38.881785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f31b8 00:35:10.162 [2024-12-07 10:09:38.882876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.163 [2024-12-07 10:09:38.882896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:10.421 [2024-12-07 10:09:38.893179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f31b8 00:35:10.421 [2024-12-07 10:09:38.894696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.421 [2024-12-07 10:09:38.894717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:10.421 [2024-12-07 10:09:38.901200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f6cc8 00:35:10.421 [2024-12-07 10:09:38.902231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.421 [2024-12-07 10:09:38.902251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:10.421 [2024-12-07 10:09:38.911628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fac10 00:35:10.421 [2024-12-07 10:09:38.913135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.421 [2024-12-07 10:09:38.913155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:10.421 [2024-12-07 10:09:38.919707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198eaab8 00:35:10.421 [2024-12-07 10:09:38.920719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.421 [2024-12-07 10:09:38.920738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:10.421 [2024-12-07 10:09:38.928941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f20d8 00:35:10.421 [2024-12-07 10:09:38.929952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.421 [2024-12-07 10:09:38.929972] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:10.421 [2024-12-07 10:09:38.938381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ff3c8 00:35:10.421 [2024-12-07 10:09:38.939376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.421 [2024-12-07 10:09:38.939395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:10.421 [2024-12-07 10:09:38.946903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e84c0 00:35:10.421 [2024-12-07 10:09:38.947902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.421 [2024-12-07 10:09:38.947924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:10.421 [2024-12-07 10:09:38.956618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e4de8 00:35:10.421 [2024-12-07 10:09:38.957740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.421 [2024-12-07 10:09:38.957759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:10.421 [2024-12-07 10:09:38.966376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f57b0 00:35:10.421 [2024-12-07 10:09:38.967620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.421 [2024-12-07 10:09:38.967641] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:10.421 [2024-12-07 10:09:38.974423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f0bc0 00:35:10.421 [2024-12-07 10:09:38.975170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.421 [2024-12-07 10:09:38.975189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:10.421 [2024-12-07 10:09:38.983068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f4298 00:35:10.421 [2024-12-07 10:09:38.983800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.421 [2024-12-07 10:09:38.983819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:10.421 [2024-12-07 10:09:38.994159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f4298 00:35:10.421 [2024-12-07 10:09:38.995386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.421 [2024-12-07 10:09:38.995405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:10.421 [2024-12-07 10:09:39.003854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198de8a8 00:35:10.421 [2024-12-07 10:09:39.005209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17656 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:10.421 [2024-12-07 10:09:39.005228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:10.421 [2024-12-07 10:09:39.013585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f1868 00:35:10.421 [2024-12-07 10:09:39.015049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.421 [2024-12-07 10:09:39.015068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:10.421 [2024-12-07 10:09:39.021651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e5220 00:35:10.421 [2024-12-07 10:09:39.022618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.421 [2024-12-07 10:09:39.022638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:10.421 [2024-12-07 10:09:39.030861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198eaef0 00:35:10.421 [2024-12-07 10:09:39.031937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.421 [2024-12-07 10:09:39.031959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:10.421 [2024-12-07 10:09:39.040408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f4298 00:35:10.421 [2024-12-07 10:09:39.041373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 
nsid:1 lba:16014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.422 [2024-12-07 10:09:39.041392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:10.422 [2024-12-07 10:09:39.048921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e3498 00:35:10.422 [2024-12-07 10:09:39.049880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.422 [2024-12-07 10:09:39.049899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:10.422 [2024-12-07 10:09:39.059971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e3498 00:35:10.422 [2024-12-07 10:09:39.061399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.422 [2024-12-07 10:09:39.061418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.422 [2024-12-07 10:09:39.068063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ddc00 00:35:10.422 [2024-12-07 10:09:39.069004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.422 [2024-12-07 10:09:39.069024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:10.422 [2024-12-07 10:09:39.077596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e1b48 00:35:10.422 [2024-12-07 10:09:39.078535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.422 [2024-12-07 10:09:39.078554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:10.422 [2024-12-07 10:09:39.086958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fd208 00:35:10.422 [2024-12-07 10:09:39.087878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.422 [2024-12-07 10:09:39.087897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:10.422 [2024-12-07 10:09:39.095501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e3d08 00:35:10.422 [2024-12-07 10:09:39.096424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.422 [2024-12-07 10:09:39.096444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:10.422 [2024-12-07 10:09:39.106536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e3d08 00:35:10.422 [2024-12-07 10:09:39.107929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.422 [2024-12-07 10:09:39.107952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:10.422 [2024-12-07 10:09:39.114591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f2510 
00:35:10.422 [2024-12-07 10:09:39.115496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.422 [2024-12-07 10:09:39.115515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:10.422 [2024-12-07 10:09:39.123862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e0a68 00:35:10.422 [2024-12-07 10:09:39.124770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.422 [2024-12-07 10:09:39.124790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:10.422 [2024-12-07 10:09:39.132519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f7538 00:35:10.422 [2024-12-07 10:09:39.133476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.422 [2024-12-07 10:09:39.133495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:10.682 [2024-12-07 10:09:39.143881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f7538 00:35:10.682 [2024-12-07 10:09:39.145306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.682 [2024-12-07 10:09:39.145328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:10.682 [2024-12-07 10:09:39.153764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x218b0a0) with pdu=0x2000198fc998 00:35:10.682 [2024-12-07 10:09:39.155269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.682 [2024-12-07 10:09:39.155290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:10.682 [2024-12-07 10:09:39.161801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198df550 00:35:10.682 [2024-12-07 10:09:39.162815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.682 [2024-12-07 10:09:39.162835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:10.682 [2024-12-07 10:09:39.171106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e7818 00:35:10.682 [2024-12-07 10:09:39.172109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.682 [2024-12-07 10:09:39.172128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:10.682 [2024-12-07 10:09:39.180476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f0788 00:35:10.682 [2024-12-07 10:09:39.181488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.682 [2024-12-07 10:09:39.181508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:10.682 [2024-12-07 10:09:39.190857] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f0788 00:35:10.682 [2024-12-07 10:09:39.192338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.682 [2024-12-07 10:09:39.192361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:10.682 [2024-12-07 10:09:39.198972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e4140 00:35:10.682 [2024-12-07 10:09:39.199984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.682 [2024-12-07 10:09:39.200004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:10.682 [2024-12-07 10:09:39.208300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e4de8 00:35:10.682 [2024-12-07 10:09:39.209282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.682 [2024-12-07 10:09:39.209301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:10.682 [2024-12-07 10:09:39.217659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e6738 00:35:10.682 [2024-12-07 10:09:39.218632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.682 [2024-12-07 10:09:39.218651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 
dnr:0 00:35:10.682 [2024-12-07 10:09:39.226194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ecc78 00:35:10.682 [2024-12-07 10:09:39.227168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.682 [2024-12-07 10:09:39.227187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:10.682 [2024-12-07 10:09:39.236094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ecc78 00:35:10.682 [2024-12-07 10:09:39.237070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.682 [2024-12-07 10:09:39.237090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:10.682 [2024-12-07 10:09:39.244808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f92c0 00:35:10.682 [2024-12-07 10:09:39.245804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.682 [2024-12-07 10:09:39.245823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:10.682 [2024-12-07 10:09:39.256072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f92c0 00:35:10.682 [2024-12-07 10:09:39.257506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.682 [2024-12-07 10:09:39.257525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:10.682 [2024-12-07 10:09:39.264103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fe2e8 00:35:10.682 [2024-12-07 10:09:39.265055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.682 [2024-12-07 10:09:39.265074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.682 [2024-12-07 10:09:39.272807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fd640 00:35:10.682 [2024-12-07 10:09:39.273750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.682 [2024-12-07 10:09:39.273769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:10.682 [2024-12-07 10:09:39.282709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fd640 00:35:10.682 [2024-12-07 10:09:39.283654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.682 [2024-12-07 10:09:39.283674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:10.683 [2024-12-07 10:09:39.292134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e0630 00:35:10.683 [2024-12-07 10:09:39.293070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.683 [2024-12-07 10:09:39.293089] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:10.683 [2024-12-07 10:09:39.300675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fb480 00:35:10.683 [2024-12-07 10:09:39.301677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.683 [2024-12-07 10:09:39.301695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:10.683 [2024-12-07 10:09:39.311056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e38d0 00:35:10.683 [2024-12-07 10:09:39.312200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.683 [2024-12-07 10:09:39.312220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.683 [2024-12-07 10:09:39.320340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f0bc0 00:35:10.683 [2024-12-07 10:09:39.321513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.683 [2024-12-07 10:09:39.321532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.683 [2024-12-07 10:09:39.329924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f1ca0 00:35:10.683 [2024-12-07 10:09:39.331074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.683 [2024-12-07 10:09:39.331093] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.683 [2024-12-07 10:09:39.339289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f7da8 00:35:10.683 [2024-12-07 10:09:39.340465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.683 [2024-12-07 10:09:39.340483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.683 [2024-12-07 10:09:39.348601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e8088 00:35:10.683 [2024-12-07 10:09:39.349755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.683 [2024-12-07 10:09:39.349774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.683 [2024-12-07 10:09:39.357903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e4140 00:35:10.683 [2024-12-07 10:09:39.359055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.683 [2024-12-07 10:09:39.359074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.683 [2024-12-07 10:09:39.367203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e5ec8 00:35:10.683 [2024-12-07 10:09:39.368351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:10.683 [2024-12-07 10:09:39.368370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.683 [2024-12-07 10:09:39.376537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fef90 00:35:10.683 [2024-12-07 10:09:39.377707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.683 [2024-12-07 10:09:39.377728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.683 [2024-12-07 10:09:39.385800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f9b30 00:35:10.683 [2024-12-07 10:09:39.386955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.683 [2024-12-07 10:09:39.386974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.683 [2024-12-07 10:09:39.395099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e6738 00:35:10.683 [2024-12-07 10:09:39.396249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.683 [2024-12-07 10:09:39.396269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.683 [2024-12-07 10:09:39.404613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f4298 00:35:10.943 [2024-12-07 10:09:39.405818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20803 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.943 [2024-12-07 10:09:39.405840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.943 [2024-12-07 10:09:39.414098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fd208 00:35:10.943 [2024-12-07 10:09:39.415250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.943 [2024-12-07 10:09:39.415271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.943 [2024-12-07 10:09:39.423426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fa7d8 00:35:10.943 [2024-12-07 10:09:39.424576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.943 [2024-12-07 10:09:39.424595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.943 [2024-12-07 10:09:39.432755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f0788 00:35:10.943 [2024-12-07 10:09:39.433962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.943 [2024-12-07 10:09:39.433985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.943 [2024-12-07 10:09:39.442105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ecc78 00:35:10.943 [2024-12-07 10:09:39.443305] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.943 [2024-12-07 10:09:39.443324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.943 [2024-12-07 10:09:39.451428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e0630 00:35:10.943 [2024-12-07 10:09:39.452580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.943 [2024-12-07 10:09:39.452599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.943 [2024-12-07 10:09:39.460697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e3d08 00:35:10.943 [2024-12-07 10:09:39.461847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.943 [2024-12-07 10:09:39.461867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.943 [2024-12-07 10:09:39.469994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e3498 00:35:10.943 [2024-12-07 10:09:39.471149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.943 [2024-12-07 10:09:39.471168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.943 [2024-12-07 10:09:39.479398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f1868 00:35:10.943 [2024-12-07 10:09:39.480563] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.943 [2024-12-07 10:09:39.480582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.943 [2024-12-07 10:09:39.488613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e1f80 00:35:10.943 [2024-12-07 10:09:39.489746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.943 [2024-12-07 10:09:39.489765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.943 [2024-12-07 10:09:39.497951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e84c0 00:35:10.943 [2024-12-07 10:09:39.499102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.943 [2024-12-07 10:09:39.499121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.943 [2024-12-07 10:09:39.507252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e73e0 00:35:10.943 [2024-12-07 10:09:39.508402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.943 [2024-12-07 10:09:39.508421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.943 [2024-12-07 10:09:39.516526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ebb98 
00:35:10.943 [2024-12-07 10:09:39.517680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.943 [2024-12-07 10:09:39.517699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.943 [2024-12-07 10:09:39.525843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fe720 00:35:10.944 [2024-12-07 10:09:39.526990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.944 [2024-12-07 10:09:39.527010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.944 [2024-12-07 10:09:39.535206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fb048 00:35:10.944 [2024-12-07 10:09:39.536355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.944 [2024-12-07 10:09:39.536374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.944 [2024-12-07 10:09:39.545724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e6300 00:35:10.944 [2024-12-07 10:09:39.547333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.944 [2024-12-07 10:09:39.547352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.944 [2024-12-07 10:09:39.552313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x218b0a0) with pdu=0x2000198fb480 00:35:10.944 [2024-12-07 10:09:39.553086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.944 [2024-12-07 10:09:39.553106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:10.944 [2024-12-07 10:09:39.561572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f5be8 00:35:10.944 [2024-12-07 10:09:39.562493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.944 [2024-12-07 10:09:39.562512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.944 [2024-12-07 10:09:39.571302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f57b0 00:35:10.944 [2024-12-07 10:09:39.572295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.944 [2024-12-07 10:09:39.572316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:10.944 [2024-12-07 10:09:39.581313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f0788 00:35:10.944 [2024-12-07 10:09:39.582428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.944 [2024-12-07 10:09:39.582447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.944 [2024-12-07 10:09:39.590007] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198ff3c8 00:35:10.944 [2024-12-07 10:09:39.590775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.944 [2024-12-07 10:09:39.590794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:10.944 [2024-12-07 10:09:39.599275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fdeb0 00:35:10.944 [2024-12-07 10:09:39.600069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.944 [2024-12-07 10:09:39.600088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:10.944 [2024-12-07 10:09:39.608791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f7538 00:35:10.944 [2024-12-07 10:09:39.609580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.944 [2024-12-07 10:09:39.609601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:10.944 [2024-12-07 10:09:39.618368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f4f40 00:35:10.944 [2024-12-07 10:09:39.618925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.944 [2024-12-07 10:09:39.618945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:35:10.944 [2024-12-07 10:09:39.628383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e49b0 00:35:10.944 [2024-12-07 10:09:39.629090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.944 [2024-12-07 10:09:39.629111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:10.944 [2024-12-07 10:09:39.637966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f35f0 00:35:10.944 [2024-12-07 10:09:39.638985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.944 [2024-12-07 10:09:39.639005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:10.944 [2024-12-07 10:09:39.647271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f9b30 00:35:10.944 [2024-12-07 10:09:39.648293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.944 [2024-12-07 10:09:39.648312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:10.944 [2024-12-07 10:09:39.656573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198fef90 00:35:10.944 [2024-12-07 10:09:39.657598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.944 [2024-12-07 10:09:39.657617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:11.203 [2024-12-07 10:09:39.666113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e6b70 00:35:11.203 [2024-12-07 10:09:39.667150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.203 [2024-12-07 10:09:39.667172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:11.203 [2024-12-07 10:09:39.676818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f0350 00:35:11.203 [2024-12-07 10:09:39.678237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.203 [2024-12-07 10:09:39.678261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:11.203 [2024-12-07 10:09:39.684878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198df550 00:35:11.203 [2024-12-07 10:09:39.685912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.203 [2024-12-07 10:09:39.685932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:11.203 [2024-12-07 10:09:39.694176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198df550 00:35:11.203 [2024-12-07 10:09:39.695191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.203 [2024-12-07 10:09:39.695210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:11.203 [2024-12-07 10:09:39.703493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198df550 00:35:11.203 [2024-12-07 10:09:39.704434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.203 [2024-12-07 10:09:39.704454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:11.203 [2024-12-07 10:09:39.712147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f1430 00:35:11.203 [2024-12-07 10:09:39.713066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.204 [2024-12-07 10:09:39.713086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:11.204 [2024-12-07 10:09:39.722063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f1430 00:35:11.204 [2024-12-07 10:09:39.723075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.204 [2024-12-07 10:09:39.723096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:11.204 [2024-12-07 10:09:39.731366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f1430 00:35:11.204 [2024-12-07 10:09:39.732372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.204 
[2024-12-07 10:09:39.732392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:11.204 [2024-12-07 10:09:39.740701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f1430 00:35:11.204 [2024-12-07 10:09:39.741722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.204 [2024-12-07 10:09:39.741742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:11.204 [2024-12-07 10:09:39.751223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f1430 00:35:11.204 [2024-12-07 10:09:39.752622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.204 [2024-12-07 10:09:39.752642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:11.204 [2024-12-07 10:09:39.760662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e8088 00:35:11.204 [2024-12-07 10:09:39.762147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.204 [2024-12-07 10:09:39.762166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:11.204 [2024-12-07 10:09:39.769305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f6890 00:35:11.204 [2024-12-07 10:09:39.770421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21542 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.204 [2024-12-07 10:09:39.770441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:11.204 [2024-12-07 10:09:39.778341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198eaef0 00:35:11.204 [2024-12-07 10:09:39.779452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.204 [2024-12-07 10:09:39.779470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:11.204 [2024-12-07 10:09:39.787655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198f4f40 00:35:11.204 [2024-12-07 10:09:39.788779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.204 [2024-12-07 10:09:39.788799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:11.204 [2024-12-07 10:09:39.796388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b0a0) with pdu=0x2000198e1b48 00:35:11.204 [2024-12-07 10:09:39.797841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.204 [2024-12-07 10:09:39.797861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:11.204 27251.00 IOPS, 106.45 MiB/s 00:35:11.204 Latency(us) 00:35:11.204 [2024-12-07T09:09:39.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.204 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, 
depth: 128, IO size: 4096) 00:35:11.204 nvme0n1 : 2.00 27278.55 106.56 0.00 0.00 4686.59 2279.51 12480.33 00:35:11.204 [2024-12-07T09:09:39.930Z] =================================================================================================================== 00:35:11.204 [2024-12-07T09:09:39.930Z] Total : 27278.55 106.56 0.00 0.00 4686.59 2279.51 12480.33 00:35:11.204 { 00:35:11.204 "results": [ 00:35:11.204 { 00:35:11.204 "job": "nvme0n1", 00:35:11.204 "core_mask": "0x2", 00:35:11.204 "workload": "randwrite", 00:35:11.204 "status": "finished", 00:35:11.204 "queue_depth": 128, 00:35:11.204 "io_size": 4096, 00:35:11.204 "runtime": 2.004982, 00:35:11.204 "iops": 27278.54913410694, 00:35:11.204 "mibps": 106.55683255510523, 00:35:11.204 "io_failed": 0, 00:35:11.204 "io_timeout": 0, 00:35:11.204 "avg_latency_us": 4686.586270653823, 00:35:11.204 "min_latency_us": 2279.513043478261, 00:35:11.204 "max_latency_us": 12480.333913043478 00:35:11.204 } 00:35:11.204 ], 00:35:11.204 "core_count": 1 00:35:11.204 } 00:35:11.204 10:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:11.204 10:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:11.204 10:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:11.204 | .driver_specific 00:35:11.204 | .nvme_error 00:35:11.204 | .status_code 00:35:11.204 | .command_transient_transport_error' 00:35:11.204 10:09:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:11.463 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 214 > 0 )) 00:35:11.463 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1461071 00:35:11.463 10:09:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1461071 ']' 00:35:11.463 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1461071 00:35:11.463 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:11.463 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:11.463 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1461071 00:35:11.463 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:11.463 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:11.463 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1461071' 00:35:11.463 killing process with pid 1461071 00:35:11.463 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1461071 00:35:11.463 Received shutdown signal, test time was about 2.000000 seconds 00:35:11.463 00:35:11.463 Latency(us) 00:35:11.463 [2024-12-07T09:09:40.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.463 [2024-12-07T09:09:40.189Z] =================================================================================================================== 00:35:11.463 [2024-12-07T09:09:40.189Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:11.463 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1461071 00:35:11.722 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:11.722 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@54 -- # local rw bs qd 00:35:11.722 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:11.722 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:11.722 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:11.722 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1461666 00:35:11.722 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1461666 /var/tmp/bperf.sock 00:35:11.722 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:11.722 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1461666 ']' 00:35:11.722 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:11.722 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:11.722 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:11.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:11.722 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:11.722 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:11.722 [2024-12-07 10:09:40.303938] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:35:11.722 [2024-12-07 10:09:40.303996] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1461666 ] 00:35:11.722 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:11.722 Zero copy mechanism will not be used. 00:35:11.722 [2024-12-07 10:09:40.359698] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.722 [2024-12-07 10:09:40.397415] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.981 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:11.981 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:11.981 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:11.981 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:11.981 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:11.981 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.981 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:11.981 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.981 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:11.981 10:09:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.549 nvme0n1 00:35:12.549 10:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:12.549 10:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.549 10:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:12.549 10:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.549 10:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:12.549 10:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:12.549 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:12.549 Zero copy mechanism will not be used. 00:35:12.549 Running I/O for 2 seconds... 
00:35:12.549 [2024-12-07 10:09:41.215413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.549 [2024-12-07 10:09:41.215711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.549 [2024-12-07 10:09:41.215742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.549 [2024-12-07 10:09:41.221835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.549 [2024-12-07 10:09:41.222127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.549 [2024-12-07 10:09:41.222152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.549 [2024-12-07 10:09:41.229185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.549 [2024-12-07 10:09:41.229471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.549 [2024-12-07 10:09:41.229492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.549 [2024-12-07 10:09:41.236682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.549 [2024-12-07 10:09:41.236984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.549 [2024-12-07 10:09:41.237006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.549 [2024-12-07 10:09:41.244079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.549 [2024-12-07 10:09:41.244364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.549 [2024-12-07 10:09:41.244386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.549 [2024-12-07 10:09:41.251634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.549 [2024-12-07 10:09:41.251918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.549 [2024-12-07 10:09:41.251940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.549 [2024-12-07 10:09:41.258410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.549 [2024-12-07 10:09:41.258706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.549 [2024-12-07 10:09:41.258727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.549 [2024-12-07 10:09:41.265450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.550 [2024-12-07 10:09:41.265744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.550 [2024-12-07 10:09:41.265766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.550 [2024-12-07 10:09:41.272132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.550 [2024-12-07 10:09:41.272443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.550 [2024-12-07 10:09:41.272468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.279182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.279474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.279497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.285319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.285613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.285634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.291082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.291386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:12.810 [2024-12-07 10:09:41.291407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.296191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.296483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.296504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.302066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.302351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.302373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.307527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.307824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.307845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.313098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.313395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.313416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.318258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.318532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.318553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.323252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.323533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.323554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.328216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.328498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.328519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.333488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.333763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.333784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.339419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.339699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.339720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.345629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.345910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.345930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.351083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.351378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.351399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.356589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 
00:35:12.810 [2024-12-07 10:09:41.356881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.356901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.361796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.362083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.362104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.366542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.366822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.366843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.371198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.371480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.371501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.375853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.376140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.376160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.380631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.380914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.380938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.385697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.385986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.386007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 10:09:41.390448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.390728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.390748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.810 [2024-12-07 
10:09:41.395529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.810 [2024-12-07 10:09:41.395807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.810 [2024-12-07 10:09:41.395828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.811 [2024-12-07 10:09:41.401160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.811 [2024-12-07 10:09:41.401458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.811 [2024-12-07 10:09:41.401479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.811 [2024-12-07 10:09:41.407888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.811 [2024-12-07 10:09:41.408195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.811 [2024-12-07 10:09:41.408216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.811 [2024-12-07 10:09:41.414503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:12.811 [2024-12-07 10:09:41.414783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.811 [2024-12-07 10:09:41.414804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.420616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.420914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.420935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.426508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.426593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.426612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.433761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.434069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.434091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.440671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.440961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.440982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.446287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.446574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.446595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.451237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.451521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.451541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.456254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.456542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.456562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.461556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.461842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.461862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.466775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.467066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.467087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.471511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.471797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.471818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.476375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.476667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.476688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.481114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.481408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.481429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.485715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.486001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.486022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.490302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.490583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.490603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.494872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.495161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.495181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.499465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.499749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.499769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.504026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.504310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.504329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.508611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.508898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.508919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.513229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.513516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.513536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.517880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.518171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.518195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.522510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.522795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.522815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.527108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:12.811 [2024-12-07 10:09:41.527387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:12.811 [2024-12-07 10:09:41.527411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:12.811 [2024-12-07 10:09:41.531945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.532255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.532279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.536746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.537066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.537088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.541425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.541707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.541729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.546036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.546321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.546342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.550706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.550999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.551020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.555691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.555992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.556013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.561032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.561322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.561345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.566927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.567223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.567243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.573802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.574103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.574124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.580218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.580506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.580526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.586242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.586529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.586550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.591149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.591439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.591459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.596236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.596536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.596557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.601090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.601379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.601400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.606031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.606320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.606340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.610706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.610995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.611016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.615398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.615680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.615701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.620080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.620366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.620386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.624723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.625013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.625035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.629321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.629607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.072 [2024-12-07 10:09:41.629628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.072 [2024-12-07 10:09:41.633925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.072 [2024-12-07 10:09:41.634217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.634237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.638583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.638864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.638885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.643171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.643456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.643476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.647815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.648104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.648128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.652405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.652688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.652708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.657194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.657478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.657499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.662559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.662846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.662867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.667637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.667925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.667945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.672967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.673255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.673275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.679624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.679910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.679930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.686673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.686960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.686980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.694436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.694733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.694753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.701674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.701975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.701995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.708975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.709102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.709120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.715974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.716281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.716301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.722781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.723130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.723150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.729816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.730098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.730118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.735840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.736114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.736150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.741626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.741896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.741915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.747905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.748177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.748198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.754292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.754561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.754585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.760608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.760873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.760893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.766710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.766989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.767009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.772736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.773009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.773029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.778141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.778410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.778429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.784236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.784504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.784523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.073 [2024-12-07 10:09:41.790774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.073 [2024-12-07 10:09:41.791052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.073 [2024-12-07 10:09:41.791074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.334 [2024-12-07 10:09:41.796170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.334 [2024-12-07 10:09:41.796437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.334 [2024-12-07 10:09:41.796460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.334 [2024-12-07 10:09:41.801260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.334 [2024-12-07 10:09:41.801528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.334 [2024-12-07 10:09:41.801549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.334 [2024-12-07 10:09:41.806359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.334 [2024-12-07 10:09:41.806631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.334 [2024-12-07 10:09:41.806652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.334 [2024-12-07 10:09:41.811930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.334 [2024-12-07 10:09:41.812207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.334 [2024-12-07 10:09:41.812228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.334 [2024-12-07 10:09:41.817661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.334 [2024-12-07 10:09:41.817929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.334 [2024-12-07 10:09:41.817955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.334 [2024-12-07 10:09:41.823263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.334 [2024-12-07 10:09:41.823526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.334 [2024-12-07 10:09:41.823546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.334 [2024-12-07 10:09:41.828279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.334 [2024-12-07 10:09:41.828544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.334 [2024-12-07 10:09:41.828564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.334 [2024-12-07 10:09:41.833332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.334 [2024-12-07 10:09:41.833601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.334 [2024-12-07 10:09:41.833621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.334 [2024-12-07 10:09:41.838615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.334 [2024-12-07 10:09:41.838936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.334 [2024-12-07 10:09:41.838962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.334 [2024-12-07 10:09:41.844256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.334 [2024-12-07 10:09:41.844519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.334 [2024-12-07 10:09:41.844538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.334 [2024-12-07 10:09:41.849805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.334 [2024-12-07 10:09:41.850074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.334 [2024-12-07 10:09:41.850094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.334 [2024-12-07 10:09:41.856054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.334 [2024-12-07 10:09:41.856318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.334 [2024-12-07 10:09:41.856337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.334 [2024-12-07 10:09:41.862731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.334 [2024-12-07 10:09:41.863085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:35:13.334 [2024-12-07 10:09:41.863104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.334 [2024-12-07 10:09:41.870215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.334 [2024-12-07 10:09:41.870486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.334 [2024-12-07 10:09:41.870505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.334 [2024-12-07 10:09:41.878346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.334 [2024-12-07 10:09:41.878700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.334 [2024-12-07 10:09:41.878720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.334 [2024-12-07 10:09:41.885395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.334 [2024-12-07 10:09:41.885663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.334 [2024-12-07 10:09:41.885683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.334 [2024-12-07 10:09:41.891024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.334 [2024-12-07 10:09:41.891293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.334 [2024-12-07 10:09:41.891312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.334 [2024-12-07 10:09:41.897475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.897739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.897758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.902862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.903132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.903151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.907923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.908216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.908239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.912992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.913255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.913274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.918243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.918505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.918525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.923209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.923479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.923498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.927973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.928240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.928259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.932749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 
00:35:13.335 [2024-12-07 10:09:41.933021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.933040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.937678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.937942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.937967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.943855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.944128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.944149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.950071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.950341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.950360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.956043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.956318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.956338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.961981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.962252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.962271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.967353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.967616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.967636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.972412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.972677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.972697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.977357] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.977625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.977645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.982407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.982679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.982699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.987567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.987832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.987852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.992125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.992394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.992413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:41.996865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:41.997145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.335 [2024-12-07 10:09:41.997164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.335 [2024-12-07 10:09:42.001607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.335 [2024-12-07 10:09:42.001871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.336 [2024-12-07 10:09:42.001891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.336 [2024-12-07 10:09:42.006158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.336 [2024-12-07 10:09:42.006423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.336 [2024-12-07 10:09:42.006443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.336 [2024-12-07 10:09:42.010528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.336 [2024-12-07 10:09:42.010790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.336 [2024-12-07 10:09:42.010811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.336 [2024-12-07 10:09:42.014880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.336 [2024-12-07 10:09:42.015150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.336 [2024-12-07 10:09:42.015170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.336 [2024-12-07 10:09:42.019206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.336 [2024-12-07 10:09:42.019471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.336 [2024-12-07 10:09:42.019490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.336 [2024-12-07 10:09:42.023562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.336 [2024-12-07 10:09:42.023827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.336 [2024-12-07 10:09:42.023847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.336 [2024-12-07 10:09:42.027912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.336 [2024-12-07 10:09:42.028202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.336 [2024-12-07 10:09:42.028222] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.336 [2024-12-07 10:09:42.032447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.336 [2024-12-07 10:09:42.032716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.336 [2024-12-07 10:09:42.032736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.336 [2024-12-07 10:09:42.037211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.336 [2024-12-07 10:09:42.037472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.336 [2024-12-07 10:09:42.037496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.336 [2024-12-07 10:09:42.042044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.336 [2024-12-07 10:09:42.042309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.336 [2024-12-07 10:09:42.042329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.336 [2024-12-07 10:09:42.046451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.336 [2024-12-07 10:09:42.046714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:13.336 [2024-12-07 10:09:42.046735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.336 [2024-12-07 10:09:42.050793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.336 [2024-12-07 10:09:42.051068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.336 [2024-12-07 10:09:42.051088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.336 [2024-12-07 10:09:42.055357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.336 [2024-12-07 10:09:42.055630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.336 [2024-12-07 10:09:42.055651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.597 [2024-12-07 10:09:42.059804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.597 [2024-12-07 10:09:42.060073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.597 [2024-12-07 10:09:42.060095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.597 [2024-12-07 10:09:42.064231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.597 [2024-12-07 10:09:42.064491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.597 [2024-12-07 10:09:42.064513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.597 [2024-12-07 10:09:42.068591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.597 [2024-12-07 10:09:42.068855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.597 [2024-12-07 10:09:42.068875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.597 [2024-12-07 10:09:42.073105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.597 [2024-12-07 10:09:42.073369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.597 [2024-12-07 10:09:42.073389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.597 [2024-12-07 10:09:42.077811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.597 [2024-12-07 10:09:42.078084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.597 [2024-12-07 10:09:42.078104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.597 [2024-12-07 10:09:42.082481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.597 [2024-12-07 10:09:42.082742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.597 [2024-12-07 10:09:42.082762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.597 [2024-12-07 10:09:42.086955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.597 [2024-12-07 10:09:42.087223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.597 [2024-12-07 10:09:42.087243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.597 [2024-12-07 10:09:42.091307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.597 [2024-12-07 10:09:42.091568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.597 [2024-12-07 10:09:42.091588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.597 [2024-12-07 10:09:42.095758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.597 [2024-12-07 10:09:42.096028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.597 [2024-12-07 10:09:42.096047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.597 [2024-12-07 10:09:42.100359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 
00:35:13.597 [2024-12-07 10:09:42.100627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.597 [2024-12-07 10:09:42.100648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.597 [2024-12-07 10:09:42.104765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.597 [2024-12-07 10:09:42.105048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.597 [2024-12-07 10:09:42.105068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.597 [2024-12-07 10:09:42.110271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.597 [2024-12-07 10:09:42.110602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.597 [2024-12-07 10:09:42.110622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.597 [2024-12-07 10:09:42.116635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.597 [2024-12-07 10:09:42.116998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.597 [2024-12-07 10:09:42.117018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.597 [2024-12-07 10:09:42.122040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.597 [2024-12-07 10:09:42.122305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.597 [2024-12-07 10:09:42.122325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.597 [2024-12-07 10:09:42.127309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.597 [2024-12-07 10:09:42.127577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.597 [2024-12-07 10:09:42.127596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.597 [2024-12-07 10:09:42.131880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.597 [2024-12-07 10:09:42.132148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.597 [2024-12-07 10:09:42.132168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.597 [2024-12-07 10:09:42.136349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.597 [2024-12-07 10:09:42.136616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.597 [2024-12-07 10:09:42.136636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.597 [2024-12-07 10:09:42.140889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.597 [2024-12-07 10:09:42.141160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.597 [2024-12-07 10:09:42.141180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.597 [2024-12-07 10:09:42.145273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.597 [2024-12-07 10:09:42.145540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.597 [2024-12-07 10:09:42.145560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.597 [2024-12-07 10:09:42.149790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.597 [2024-12-07 10:09:42.150060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.150080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.154208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.154470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.154489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.158629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.158892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.158916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.163117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.163383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.163403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.167456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.167723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.167742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.171964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.172230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.172255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.176642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.176903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.176923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.181029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.181297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.181323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.185788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.186062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.186084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.190528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.190798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.190818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.195474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.195741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.195761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.200755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.201032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.201052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.205457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.205728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.205748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.598 5774.00 IOPS, 721.75 MiB/s [2024-12-07T09:09:42.324Z] [2024-12-07 10:09:42.211066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.211337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.211357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.216098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.216365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.216385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.221223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.221489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.221509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.226594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.226861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.226881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.231972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.232238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.232257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.238776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.239135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.239156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.245247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.245523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.245547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.253161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.253521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.598 [2024-12-07 10:09:42.253540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.598 [2024-12-07 10:09:42.260494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.598 [2024-12-07 10:09:42.260759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.599 [2024-12-07 10:09:42.260779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.599 [2024-12-07 10:09:42.266392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.599 [2024-12-07 10:09:42.266656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.599 [2024-12-07 10:09:42.266675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.599 [2024-12-07 10:09:42.272860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.599 [2024-12-07 10:09:42.273131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.599 [2024-12-07 10:09:42.273152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.599 [2024-12-07 10:09:42.278720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.599 [2024-12-07 10:09:42.278992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.599 [2024-12-07 10:09:42.279012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.599 [2024-12-07 10:09:42.284885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.599 [2024-12-07 10:09:42.285173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.599 [2024-12-07 10:09:42.285193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.599 [2024-12-07 10:09:42.290689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.599 [2024-12-07 10:09:42.290959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.599 [2024-12-07 10:09:42.290979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.599 [2024-12-07 10:09:42.296916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.599 [2024-12-07 10:09:42.297208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.599 [2024-12-07 10:09:42.297228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.599 [2024-12-07 10:09:42.303052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.599 [2024-12-07 10:09:42.303327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.599 [2024-12-07 10:09:42.303346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.599 [2024-12-07 10:09:42.309176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.599 [2024-12-07 10:09:42.309444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.599 [2024-12-07 10:09:42.309464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.599 [2024-12-07 10:09:42.315479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.599 [2024-12-07 10:09:42.315800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.599 [2024-12-07 10:09:42.315822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.859 [2024-12-07 10:09:42.323443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.859 [2024-12-07 10:09:42.323857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.859 [2024-12-07 10:09:42.323878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.859 [2024-12-07 10:09:42.330766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.859 [2024-12-07 10:09:42.331091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.859 [2024-12-07 10:09:42.331113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.859 [2024-12-07 10:09:42.337762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.859 [2024-12-07 10:09:42.338041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.859 [2024-12-07 10:09:42.338063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.859 [2024-12-07 10:09:42.344419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.859 [2024-12-07 10:09:42.344794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.859 [2024-12-07 10:09:42.344815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.859 [2024-12-07 10:09:42.352379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.859 [2024-12-07 10:09:42.352718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.859 [2024-12-07 10:09:42.352738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.859 [2024-12-07 10:09:42.360333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.859 [2024-12-07 10:09:42.360682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.859 [2024-12-07 10:09:42.360702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.859 [2024-12-07 10:09:42.368427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.859 [2024-12-07 10:09:42.368796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.368816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.376495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.376848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.376868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.384487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.384813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.384833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.391071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.391429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.391450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.398579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.398970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.398990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.404728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.405002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.405021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.410907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.411259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.411280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.417028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.417296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.417315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.422732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.423005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.423029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.428842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.429125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.429146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.434723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.434995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.435014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.440810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.441081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.441100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.446940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.447216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.447236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.452682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.452990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.453010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.459057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.459332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.459353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.464379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.464648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.464668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.469378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.469646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.469665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.473945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.474224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.474243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.478560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.478826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.478846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.483053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.483320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.483340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.487692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.487971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.487992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.492407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.492674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.492694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.496879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.497151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.497171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.501316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.501580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.501600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.505833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.506106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.506125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.510411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.510677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.510702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.514837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.515108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.515128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.860 [2024-12-07 10:09:42.519384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.860 [2024-12-07 10:09:42.519649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.860 [2024-12-07 10:09:42.519670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.861 [2024-12-07 10:09:42.523927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.861 [2024-12-07 10:09:42.524199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.861 [2024-12-07 10:09:42.524219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.861 [2024-12-07 10:09:42.528539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.861 [2024-12-07 10:09:42.528802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.861 [2024-12-07 10:09:42.528822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.861 [2024-12-07 10:09:42.533121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.861 [2024-12-07 10:09:42.533397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.861 [2024-12-07 10:09:42.533417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.861 [2024-12-07 10:09:42.537582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.861 [2024-12-07 10:09:42.537851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.861 [2024-12-07 10:09:42.537871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.861 [2024-12-07 10:09:42.542026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.861 [2024-12-07 10:09:42.542312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.861 [2024-12-07 10:09:42.542332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.861 [2024-12-07 10:09:42.548322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.861 [2024-12-07 10:09:42.548608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.861 [2024-12-07 10:09:42.548628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.861 [2024-12-07 10:09:42.553428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.861 [2024-12-07 10:09:42.553712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.861 [2024-12-07 10:09:42.553732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.861 [2024-12-07 10:09:42.559442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.861 [2024-12-07 10:09:42.559703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.861 [2024-12-07 10:09:42.559723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:13.861 [2024-12-07 10:09:42.563919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.861 [2024-12-07 10:09:42.564218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.861 [2024-12-07 10:09:42.564238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:13.861 [2024-12-07 10:09:42.568555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.861 [2024-12-07 10:09:42.568817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.861 [2024-12-07 10:09:42.568837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:13.861 [2024-12-07 10:09:42.573806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90
00:35:13.861 [2024-12-07 10:09:42.574082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.861 [2024-12-07
10:09:42.574102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.861 [2024-12-07 10:09:42.579646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:13.861 [2024-12-07 10:09:42.580012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.861 [2024-12-07 10:09:42.580035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.587361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 10:09:42.587640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.587663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.594235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 10:09:42.594565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.594586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.601195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 10:09:42.601538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.601559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.608356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 10:09:42.608622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.608643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.615665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 10:09:42.616052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.616073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.623724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 10:09:42.624132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.624153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.631984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 10:09:42.632330] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.632350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.640364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 10:09:42.640776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.640797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.648650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 10:09:42.649040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.649060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.657157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 10:09:42.657458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.657477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.666046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 
10:09:42.666403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.666423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.674223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 10:09:42.674567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.674591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.682481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 10:09:42.682854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.682875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.690752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 10:09:42.691121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.691142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.697882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 10:09:42.698255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.698275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.706317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 10:09:42.706629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.706649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.712909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 10:09:42.713193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.713213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.719878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 10:09:42.720193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.720223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.726528] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.121 [2024-12-07 10:09:42.726852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.121 [2024-12-07 10:09:42.726873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.121 [2024-12-07 10:09:42.733466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.122 [2024-12-07 10:09:42.733741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.122 [2024-12-07 10:09:42.733761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.122 [2024-12-07 10:09:42.739604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.122 [2024-12-07 10:09:42.739915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.122 [2024-12-07 10:09:42.739936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.122 [2024-12-07 10:09:42.746177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.122 [2024-12-07 10:09:42.746459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.122 [2024-12-07 10:09:42.746480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:35:14.122 [2024-12-07 10:09:42.752190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.122 [2024-12-07 10:09:42.752456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.122 [2024-12-07 10:09:42.752476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.122 [2024-12-07 10:09:42.759224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.122 [2024-12-07 10:09:42.759512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.122 [2024-12-07 10:09:42.759532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.122 [2024-12-07 10:09:42.766198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.122 [2024-12-07 10:09:42.766569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.122 [2024-12-07 10:09:42.766590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.122 [2024-12-07 10:09:42.773513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.122 [2024-12-07 10:09:42.773864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.122 [2024-12-07 10:09:42.773884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.122 [2024-12-07 10:09:42.780447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.122 [2024-12-07 10:09:42.780758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.122 [2024-12-07 10:09:42.780778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.122 [2024-12-07 10:09:42.787012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.122 [2024-12-07 10:09:42.787311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.122 [2024-12-07 10:09:42.787331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.122 [2024-12-07 10:09:42.794209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.122 [2024-12-07 10:09:42.794555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.122 [2024-12-07 10:09:42.794575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.122 [2024-12-07 10:09:42.802346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.122 [2024-12-07 10:09:42.802715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.122 [2024-12-07 10:09:42.802736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.122 [2024-12-07 10:09:42.809921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.122 [2024-12-07 10:09:42.810261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.122 [2024-12-07 10:09:42.810282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.122 [2024-12-07 10:09:42.817471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.122 [2024-12-07 10:09:42.817806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.122 [2024-12-07 10:09:42.817827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.122 [2024-12-07 10:09:42.825171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.122 [2024-12-07 10:09:42.825534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.122 [2024-12-07 10:09:42.825555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.122 [2024-12-07 10:09:42.831526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.122 [2024-12-07 10:09:42.831800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:14.122 [2024-12-07 10:09:42.831821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.122 [2024-12-07 10:09:42.836418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.122 [2024-12-07 10:09:42.836710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.122 [2024-12-07 10:09:42.836731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.122 [2024-12-07 10:09:42.841476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.122 [2024-12-07 10:09:42.841759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.122 [2024-12-07 10:09:42.841781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.382 [2024-12-07 10:09:42.846870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.382 [2024-12-07 10:09:42.847170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.382 [2024-12-07 10:09:42.847193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.382 [2024-12-07 10:09:42.852121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.382 [2024-12-07 10:09:42.852402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.382 [2024-12-07 10:09:42.852429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.382 [2024-12-07 10:09:42.858307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.382 [2024-12-07 10:09:42.858618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.382 [2024-12-07 10:09:42.858640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.382 [2024-12-07 10:09:42.864786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.382 [2024-12-07 10:09:42.865077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.382 [2024-12-07 10:09:42.865097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.382 [2024-12-07 10:09:42.871415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.382 [2024-12-07 10:09:42.871755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.382 [2024-12-07 10:09:42.871776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.382 [2024-12-07 10:09:42.878575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.382 [2024-12-07 10:09:42.878928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.382 [2024-12-07 10:09:42.878955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.382 [2024-12-07 10:09:42.885915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.382 [2024-12-07 10:09:42.886251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.382 [2024-12-07 10:09:42.886271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.382 [2024-12-07 10:09:42.893312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.382 [2024-12-07 10:09:42.893677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.382 [2024-12-07 10:09:42.893697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.382 [2024-12-07 10:09:42.900934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.382 [2024-12-07 10:09:42.901285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.382 [2024-12-07 10:09:42.901305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.382 [2024-12-07 10:09:42.908654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 
00:35:14.382 [2024-12-07 10:09:42.908985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.382 [2024-12-07 10:09:42.909005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.382 [2024-12-07 10:09:42.916261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.382 [2024-12-07 10:09:42.916639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.382 [2024-12-07 10:09:42.916660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.382 [2024-12-07 10:09:42.923593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:42.923935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:42.923962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:42.931090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:42.931476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:42.931496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:42.938917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:42.939286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:42.939306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:42.946430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:42.946829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:42.946849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:42.954583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:42.954957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:42.954976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:42.962749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:42.963075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:42.963095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 
10:09:42.969241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:42.969507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:42.969527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:42.975829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:42.976105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:42.976125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:42.982401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:42.982689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:42.982709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:42.989507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:42.989831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:42.989851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:42.996378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:42.996707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:42.996728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:43.001848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:43.002121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:43.002141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:43.006668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:43.006933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:43.006961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:43.011282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:43.011547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:43.011567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:43.015915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:43.016199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:43.016230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:43.020437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:43.020699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:43.020718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:43.024922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:43.025198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:43.025233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:43.029392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:43.029657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:43.029676] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:43.033835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:43.034105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:43.034125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:43.038299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:43.038564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:43.038584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:43.042727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:43.043001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:43.043021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:43.047140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:43.047404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:43.047425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:43.051578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:43.051843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:43.051863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:43.056104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:43.056370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:43.056389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:43.061303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:43.061569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:43.061589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:43.065839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:43.066112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:43.066132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:43.070315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.383 [2024-12-07 10:09:43.070579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.383 [2024-12-07 10:09:43.070599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.383 [2024-12-07 10:09:43.074894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.384 [2024-12-07 10:09:43.075161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.384 [2024-12-07 10:09:43.075197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.384 [2024-12-07 10:09:43.080168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.384 [2024-12-07 10:09:43.080431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.384 [2024-12-07 10:09:43.080451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.384 [2024-12-07 10:09:43.084703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.384 [2024-12-07 10:09:43.084972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.384 [2024-12-07 10:09:43.084991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.384 [2024-12-07 10:09:43.089177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.384 [2024-12-07 10:09:43.089443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.384 [2024-12-07 10:09:43.089463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.384 [2024-12-07 10:09:43.093659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.384 [2024-12-07 10:09:43.093938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.384 [2024-12-07 10:09:43.093963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.384 [2024-12-07 10:09:43.098163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.384 [2024-12-07 10:09:43.098429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.384 [2024-12-07 10:09:43.098449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.384 [2024-12-07 10:09:43.102693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 
00:35:14.384 [2024-12-07 10:09:43.102976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.384 [2024-12-07 10:09:43.103001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.643 [2024-12-07 10:09:43.107251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.643 [2024-12-07 10:09:43.107517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.643 [2024-12-07 10:09:43.107540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.643 [2024-12-07 10:09:43.111815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.643 [2024-12-07 10:09:43.112087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.643 [2024-12-07 10:09:43.112108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.643 [2024-12-07 10:09:43.116272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.643 [2024-12-07 10:09:43.116536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.643 [2024-12-07 10:09:43.116562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.120712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.120983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.121004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.125161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.125428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.125448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.129707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.129978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.129998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.134134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.134400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.134420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.138586] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.138851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.138871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.143073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.143342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.143362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.147490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.147754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.147774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.151968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.152231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.152251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.156430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.156696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.156716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.160833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.161105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.161124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.165281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.165546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.165566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.169734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.170021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.170041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.174210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.174475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.174496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.178653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.178920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.178940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.183093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.183356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.183376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.187518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.187784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.187804] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.191942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.192214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.192234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.196377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.196642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.196662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.200865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.201142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.201163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.205282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.205545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.205565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.644 [2024-12-07 10:09:43.209625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x218b580) with pdu=0x2000198fef90 00:35:14.644 [2024-12-07 10:09:43.210827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.644 [2024-12-07 10:09:43.210849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.644 5501.50 IOPS, 687.69 MiB/s 00:35:14.644 Latency(us) 00:35:14.644 [2024-12-07T09:09:43.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.644 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:14.644 nvme0n1 : 2.00 5500.82 687.60 0.00 0.00 2904.60 1937.59 8833.11 00:35:14.644 [2024-12-07T09:09:43.370Z] =================================================================================================================== 00:35:14.644 [2024-12-07T09:09:43.370Z] Total : 5500.82 687.60 0.00 0.00 2904.60 1937.59 8833.11 00:35:14.644 { 00:35:14.644 "results": [ 00:35:14.644 { 00:35:14.644 "job": "nvme0n1", 00:35:14.644 "core_mask": "0x2", 00:35:14.644 "workload": "randwrite", 00:35:14.644 "status": "finished", 00:35:14.644 "queue_depth": 16, 00:35:14.644 "io_size": 131072, 00:35:14.644 "runtime": 2.003518, 00:35:14.644 "iops": 5500.824050495179, 00:35:14.644 "mibps": 687.6030063118974, 00:35:14.644 "io_failed": 0, 00:35:14.644 "io_timeout": 0, 00:35:14.644 "avg_latency_us": 2904.5958370383814, 00:35:14.644 "min_latency_us": 1937.5860869565217, 00:35:14.644 "max_latency_us": 8833.11304347826 00:35:14.644 } 00:35:14.644 ], 00:35:14.644 "core_count": 1 00:35:14.644 } 00:35:14.644 10:09:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:14.644 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:14.644 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:14.644 | .driver_specific 00:35:14.644 | .nvme_error 00:35:14.644 | .status_code 00:35:14.644 | .command_transient_transport_error' 00:35:14.645 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:14.912 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 355 > 0 )) 00:35:14.912 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1461666 00:35:14.912 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1461666 ']' 00:35:14.912 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1461666 00:35:14.912 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:14.912 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:14.912 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1461666 00:35:14.912 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:14.912 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:14.912 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1461666' 
00:35:14.912 killing process with pid 1461666 00:35:14.912 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1461666 00:35:14.912 Received shutdown signal, test time was about 2.000000 seconds 00:35:14.912 00:35:14.912 Latency(us) 00:35:14.912 [2024-12-07T09:09:43.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.912 [2024-12-07T09:09:43.638Z] =================================================================================================================== 00:35:14.912 [2024-12-07T09:09:43.638Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:14.912 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1461666 00:35:15.172 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1460009 00:35:15.172 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1460009 ']' 00:35:15.172 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1460009 00:35:15.172 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:15.172 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:15.172 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1460009 00:35:15.172 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:15.172 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:15.172 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1460009' 00:35:15.172 killing process with pid 1460009 00:35:15.172 10:09:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1460009 00:35:15.172 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1460009 00:35:15.430 00:35:15.430 real 0m13.930s 00:35:15.430 user 0m26.784s 00:35:15.430 sys 0m4.324s 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.430 ************************************ 00:35:15.430 END TEST nvmf_digest_error 00:35:15.430 ************************************ 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:15.430 rmmod nvme_tcp 00:35:15.430 rmmod nvme_fabrics 00:35:15.430 rmmod nvme_keyring 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 1460009 ']' 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@514 -- # killprocess 1460009 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1460009 ']' 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1460009 00:35:15.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1460009) - No such process 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1460009 is not found' 00:35:15.430 Process with pid 1460009 is not found 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:15.430 10:09:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:35:15.430 10:09:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:15.430 10:09:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:15.430 10:09:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.430 10:09:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:15.430 10:09:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:17.962 00:35:17.962 real 0m35.789s 00:35:17.962 
user 0m55.045s 00:35:17.962 sys 0m13.099s 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:17.962 ************************************ 00:35:17.962 END TEST nvmf_digest 00:35:17.962 ************************************ 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.962 ************************************ 00:35:17.962 START TEST nvmf_bdevperf 00:35:17.962 ************************************ 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:17.962 * Looking for test storage... 
00:35:17.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:17.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.962 --rc genhtml_branch_coverage=1 00:35:17.962 --rc genhtml_function_coverage=1 00:35:17.962 --rc genhtml_legend=1 00:35:17.962 --rc geninfo_all_blocks=1 00:35:17.962 --rc geninfo_unexecuted_blocks=1 00:35:17.962 00:35:17.962 ' 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- 
# LCOV_OPTS=' 00:35:17.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.962 --rc genhtml_branch_coverage=1 00:35:17.962 --rc genhtml_function_coverage=1 00:35:17.962 --rc genhtml_legend=1 00:35:17.962 --rc geninfo_all_blocks=1 00:35:17.962 --rc geninfo_unexecuted_blocks=1 00:35:17.962 00:35:17.962 ' 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:17.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.962 --rc genhtml_branch_coverage=1 00:35:17.962 --rc genhtml_function_coverage=1 00:35:17.962 --rc genhtml_legend=1 00:35:17.962 --rc geninfo_all_blocks=1 00:35:17.962 --rc geninfo_unexecuted_blocks=1 00:35:17.962 00:35:17.962 ' 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:17.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.962 --rc genhtml_branch_coverage=1 00:35:17.962 --rc genhtml_function_coverage=1 00:35:17.962 --rc genhtml_legend=1 00:35:17.962 --rc geninfo_all_blocks=1 00:35:17.962 --rc geninfo_unexecuted_blocks=1 00:35:17.962 00:35:17.962 ' 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.962 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:17.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:17.963 10:09:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:23.235 10:09:51 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:23.235 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:23.235 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:23.235 Found net devices under 0000:86:00.0: cvl_0_0 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:23.235 Found net devices under 0000:86:00.1: cvl_0_1 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # is_hw=yes 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:23.235 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:23.236 10:09:51 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:23.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:23.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:35:23.236 00:35:23.236 --- 10.0.0.2 ping statistics --- 00:35:23.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.236 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:23.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:23.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:35:23.236 00:35:23.236 --- 10.0.0.1 ping statistics --- 00:35:23.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.236 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # return 0 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:23.236 10:09:51 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=1465676 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 1465676 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1465676 ']' 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:23.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:23.236 10:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.236 [2024-12-07 10:09:51.899453] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:35:23.236 [2024-12-07 10:09:51.899498] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:23.236 [2024-12-07 10:09:51.953953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:23.495 [2024-12-07 10:09:51.996658] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:23.495 [2024-12-07 10:09:51.996697] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:23.495 [2024-12-07 10:09:51.996705] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:23.495 [2024-12-07 10:09:51.996711] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:23.495 [2024-12-07 10:09:51.996716] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:23.495 [2024-12-07 10:09:51.996822] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:23.495 [2024-12-07 10:09:51.996928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:35:23.495 [2024-12-07 10:09:51.996929] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:23.495 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:23.495 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:23.495 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:23.495 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:23.495 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.495 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:23.495 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:23.495 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.495 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.495 [2024-12-07 10:09:52.128316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:23.495 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.495 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:23.495 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.495 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.495 Malloc0 00:35:23.495 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.496 [2024-12-07 10:09:52.186549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:35:23.496 
10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:35:23.496 { 00:35:23.496 "params": { 00:35:23.496 "name": "Nvme$subsystem", 00:35:23.496 "trtype": "$TEST_TRANSPORT", 00:35:23.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:23.496 "adrfam": "ipv4", 00:35:23.496 "trsvcid": "$NVMF_PORT", 00:35:23.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:23.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:23.496 "hdgst": ${hdgst:-false}, 00:35:23.496 "ddgst": ${ddgst:-false} 00:35:23.496 }, 00:35:23.496 "method": "bdev_nvme_attach_controller" 00:35:23.496 } 00:35:23.496 EOF 00:35:23.496 )") 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:35:23.496 10:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:35:23.496 "params": { 00:35:23.496 "name": "Nvme1", 00:35:23.496 "trtype": "tcp", 00:35:23.496 "traddr": "10.0.0.2", 00:35:23.496 "adrfam": "ipv4", 00:35:23.496 "trsvcid": "4420", 00:35:23.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:23.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:23.496 "hdgst": false, 00:35:23.496 "ddgst": false 00:35:23.496 }, 00:35:23.496 "method": "bdev_nvme_attach_controller" 00:35:23.496 }' 00:35:23.755 [2024-12-07 10:09:52.239637] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:35:23.755 [2024-12-07 10:09:52.239679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465708 ] 00:35:23.755 [2024-12-07 10:09:52.293931] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:23.755 [2024-12-07 10:09:52.334100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:24.013 Running I/O for 1 seconds... 00:35:24.948 10742.00 IOPS, 41.96 MiB/s 00:35:24.948 Latency(us) 00:35:24.948 [2024-12-07T09:09:53.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:24.948 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:24.948 Verification LBA range: start 0x0 length 0x4000 00:35:24.948 Nvme1n1 : 1.00 10829.54 42.30 0.00 0.00 11776.08 1495.93 12480.33 00:35:24.948 [2024-12-07T09:09:53.674Z] =================================================================================================================== 00:35:24.948 [2024-12-07T09:09:53.675Z] Total : 10829.54 42.30 0.00 0.00 11776.08 1495.93 12480.33 00:35:25.207 10:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1465937 00:35:25.207 10:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:25.207 10:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:25.207 10:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:25.207 10:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:35:25.207 10:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:35:25.207 10:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for 
subsystem in "${@:-1}" 00:35:25.207 10:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:35:25.207 { 00:35:25.207 "params": { 00:35:25.207 "name": "Nvme$subsystem", 00:35:25.207 "trtype": "$TEST_TRANSPORT", 00:35:25.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:25.207 "adrfam": "ipv4", 00:35:25.207 "trsvcid": "$NVMF_PORT", 00:35:25.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:25.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:25.207 "hdgst": ${hdgst:-false}, 00:35:25.207 "ddgst": ${ddgst:-false} 00:35:25.207 }, 00:35:25.207 "method": "bdev_nvme_attach_controller" 00:35:25.207 } 00:35:25.207 EOF 00:35:25.207 )") 00:35:25.207 10:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:35:25.207 10:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:35:25.207 10:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:35:25.207 10:09:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:35:25.207 "params": { 00:35:25.207 "name": "Nvme1", 00:35:25.207 "trtype": "tcp", 00:35:25.207 "traddr": "10.0.0.2", 00:35:25.207 "adrfam": "ipv4", 00:35:25.207 "trsvcid": "4420", 00:35:25.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:25.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:25.207 "hdgst": false, 00:35:25.207 "ddgst": false 00:35:25.207 }, 00:35:25.207 "method": "bdev_nvme_attach_controller" 00:35:25.207 }' 00:35:25.207 [2024-12-07 10:09:53.838912] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:35:25.207 [2024-12-07 10:09:53.838967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465937 ] 00:35:25.207 [2024-12-07 10:09:53.893872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.466 [2024-12-07 10:09:53.932175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.466 Running I/O for 15 seconds... 00:35:27.775 10721.00 IOPS, 41.88 MiB/s [2024-12-07T09:09:57.069Z] 10827.50 IOPS, 42.29 MiB/s [2024-12-07T09:09:57.070Z] 10:09:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1465676 00:35:28.344 10:09:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:28.344 [2024-12-07 10:09:56.808575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.808634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.808655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.808676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:83 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.808694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.808715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.808732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.808752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.808766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:35:28.344 [2024-12-07 10:09:56.808782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.808799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.808815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.808832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.808847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.808863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.808878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.808895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.808914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.808936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.808943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.809079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.809086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.809094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96344 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.809102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.809110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.809117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.809125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.809132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.809140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.809147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.809156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.809162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 10:09:56.809171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.809178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.344 [2024-12-07 
10:09:56.809186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.344 [2024-12-07 10:09:56.809193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809270] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 
[2024-12-07 10:09:56.809650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.345 [2024-12-07 10:09:56.809682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.345 [2024-12-07 10:09:56.809692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809739] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809922] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.809988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.809997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.810003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.810011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 
nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.810018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.810027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.346 [2024-12-07 10:09:56.810034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.810042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.346 [2024-12-07 10:09:56.810049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.810059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.346 [2024-12-07 10:09:56.810066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.810078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.346 [2024-12-07 10:09:56.810085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.810094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.346 [2024-12-07 10:09:56.810101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 
[2024-12-07 10:09:56.810109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.346 [2024-12-07 10:09:56.810116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.810124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.346 [2024-12-07 10:09:56.810130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.810139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.346 [2024-12-07 10:09:56.810145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.810154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.346 [2024-12-07 10:09:56.810160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.810168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.346 [2024-12-07 10:09:56.810175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.346 [2024-12-07 10:09:56.810183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.347 [2024-12-07 10:09:56.810282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 
[2024-12-07 10:09:56.810366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 
lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 
[2024-12-07 10:09:56.810621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.347 [2024-12-07 10:09:56.810642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.347 [2024-12-07 10:09:56.810652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.348 [2024-12-07 10:09:56.810659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.348 [2024-12-07 10:09:56.810667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.348 [2024-12-07 10:09:56.810674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.348 [2024-12-07 10:09:56.810686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.348 [2024-12-07 10:09:56.810693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.348 [2024-12-07 10:09:56.810701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.348 [2024-12-07 10:09:56.810707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.348 [2024-12-07 10:09:56.810715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.348 [2024-12-07 10:09:56.810722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.348 [2024-12-07 10:09:56.810730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.348 [2024-12-07 10:09:56.810737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.348 [2024-12-07 10:09:56.810745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.348 [2024-12-07 10:09:56.810752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.348 [2024-12-07 10:09:56.810760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.348 [2024-12-07 10:09:56.810766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.348 [2024-12-07 10:09:56.810774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23939c0 is same with the state(6) to be set 00:35:28.348 [2024-12-07 10:09:56.810782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:28.348 [2024-12-07 10:09:56.810788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:28.348 [2024-12-07 10:09:56.810794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96176 len:8 PRP1 0x0 PRP2 0x0 00:35:28.348 [2024-12-07 10:09:56.810802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.348 [2024-12-07 10:09:56.810845] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23939c0 was disconnected and freed. reset controller. 00:35:28.348 [2024-12-07 10:09:56.813674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.348 [2024-12-07 10:09:56.813730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.348 [2024-12-07 10:09:56.814306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.348 [2024-12-07 10:09:56.814324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.348 [2024-12-07 10:09:56.814339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.348 [2024-12-07 10:09:56.814518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.348 [2024-12-07 10:09:56.814695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.348 [2024-12-07 10:09:56.814703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.348 [2024-12-07 10:09:56.814711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.348 [2024-12-07 10:09:56.817542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.348 [2024-12-07 10:09:56.827070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.348 [2024-12-07 10:09:56.827506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.348 [2024-12-07 10:09:56.827523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.348 [2024-12-07 10:09:56.827531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.348 [2024-12-07 10:09:56.827709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.348 [2024-12-07 10:09:56.827887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.348 [2024-12-07 10:09:56.827895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.348 [2024-12-07 10:09:56.827902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.348 [2024-12-07 10:09:56.830730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.348 [2024-12-07 10:09:56.839990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.348 [2024-12-07 10:09:56.840372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.348 [2024-12-07 10:09:56.840389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.348 [2024-12-07 10:09:56.840396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.348 [2024-12-07 10:09:56.840568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.348 [2024-12-07 10:09:56.840741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.348 [2024-12-07 10:09:56.840749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.348 [2024-12-07 10:09:56.840755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.348 [2024-12-07 10:09:56.843451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.348 [2024-12-07 10:09:56.852802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.348 [2024-12-07 10:09:56.853272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.348 [2024-12-07 10:09:56.853290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.348 [2024-12-07 10:09:56.853297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.348 [2024-12-07 10:09:56.853469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.348 [2024-12-07 10:09:56.853640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.348 [2024-12-07 10:09:56.853652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.348 [2024-12-07 10:09:56.853658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.348 [2024-12-07 10:09:56.856344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.348 [2024-12-07 10:09:56.865767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.348 [2024-12-07 10:09:56.866208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.348 [2024-12-07 10:09:56.866225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.348 [2024-12-07 10:09:56.866232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.348 [2024-12-07 10:09:56.866403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.348 [2024-12-07 10:09:56.866574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.349 [2024-12-07 10:09:56.866583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.349 [2024-12-07 10:09:56.866589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.349 [2024-12-07 10:09:56.869371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.349 [2024-12-07 10:09:56.878738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.349 [2024-12-07 10:09:56.879189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.349 [2024-12-07 10:09:56.879205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.349 [2024-12-07 10:09:56.879212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.349 [2024-12-07 10:09:56.879374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.349 [2024-12-07 10:09:56.879536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.349 [2024-12-07 10:09:56.879543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.349 [2024-12-07 10:09:56.879549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.349 [2024-12-07 10:09:56.882240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.349 [2024-12-07 10:09:56.891685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.349 [2024-12-07 10:09:56.892070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.349 [2024-12-07 10:09:56.892088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.349 [2024-12-07 10:09:56.892096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.349 [2024-12-07 10:09:56.892273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.349 [2024-12-07 10:09:56.892451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.349 [2024-12-07 10:09:56.892459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.349 [2024-12-07 10:09:56.892465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.349 [2024-12-07 10:09:56.895204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.349 [2024-12-07 10:09:56.904607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.349 [2024-12-07 10:09:56.904965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.349 [2024-12-07 10:09:56.905010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.349 [2024-12-07 10:09:56.905033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.349 [2024-12-07 10:09:56.905612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.349 [2024-12-07 10:09:56.905832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.349 [2024-12-07 10:09:56.905839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.349 [2024-12-07 10:09:56.905846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.349 [2024-12-07 10:09:56.908553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.349 [2024-12-07 10:09:56.917534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.349 [2024-12-07 10:09:56.917838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.349 [2024-12-07 10:09:56.917853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.349 [2024-12-07 10:09:56.917861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.349 [2024-12-07 10:09:56.918038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.349 [2024-12-07 10:09:56.918210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.349 [2024-12-07 10:09:56.918218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.349 [2024-12-07 10:09:56.918224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.349 [2024-12-07 10:09:56.920901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.349 [2024-12-07 10:09:56.930615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.349 [2024-12-07 10:09:56.931022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.349 [2024-12-07 10:09:56.931040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.349 [2024-12-07 10:09:56.931048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.349 [2024-12-07 10:09:56.931225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.349 [2024-12-07 10:09:56.931406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.349 [2024-12-07 10:09:56.931415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.349 [2024-12-07 10:09:56.931421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.349 [2024-12-07 10:09:56.934249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.349 [2024-12-07 10:09:56.943532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.349 [2024-12-07 10:09:56.943908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.349 [2024-12-07 10:09:56.943925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.349 [2024-12-07 10:09:56.943932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.349 [2024-12-07 10:09:56.944113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.349 [2024-12-07 10:09:56.944284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.349 [2024-12-07 10:09:56.944292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.349 [2024-12-07 10:09:56.944299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.349 [2024-12-07 10:09:56.947006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.350 [2024-12-07 10:09:56.956509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.350 [2024-12-07 10:09:56.956874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.350 [2024-12-07 10:09:56.956890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.350 [2024-12-07 10:09:56.956897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.350 [2024-12-07 10:09:56.957072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.350 [2024-12-07 10:09:56.957244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.350 [2024-12-07 10:09:56.957252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.350 [2024-12-07 10:09:56.957258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.350 [2024-12-07 10:09:56.959971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.350 [2024-12-07 10:09:56.969564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.350 [2024-12-07 10:09:56.969879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.350 [2024-12-07 10:09:56.969895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.350 [2024-12-07 10:09:56.969902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.350 [2024-12-07 10:09:56.970079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.350 [2024-12-07 10:09:56.970251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.350 [2024-12-07 10:09:56.970259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.350 [2024-12-07 10:09:56.970265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.350 [2024-12-07 10:09:56.973014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.350 [2024-12-07 10:09:56.982524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.350 [2024-12-07 10:09:56.982891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.350 [2024-12-07 10:09:56.982907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.350 [2024-12-07 10:09:56.982915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.350 [2024-12-07 10:09:56.983091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.350 [2024-12-07 10:09:56.983264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.350 [2024-12-07 10:09:56.983272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.350 [2024-12-07 10:09:56.983283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.350 [2024-12-07 10:09:56.985963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.350 [2024-12-07 10:09:56.995485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.350 [2024-12-07 10:09:56.995790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.350 [2024-12-07 10:09:56.995829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.350 [2024-12-07 10:09:56.995853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.350 [2024-12-07 10:09:56.996447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.350 [2024-12-07 10:09:56.997039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.350 [2024-12-07 10:09:56.997066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.350 [2024-12-07 10:09:56.997087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.350 [2024-12-07 10:09:56.999776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.350 [2024-12-07 10:09:57.008446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.350 [2024-12-07 10:09:57.008818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.350 [2024-12-07 10:09:57.008861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.350 [2024-12-07 10:09:57.008884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.350 [2024-12-07 10:09:57.009376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.350 [2024-12-07 10:09:57.009549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.350 [2024-12-07 10:09:57.009557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.350 [2024-12-07 10:09:57.009563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.350 [2024-12-07 10:09:57.012240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.350 [2024-12-07 10:09:57.021393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.350 [2024-12-07 10:09:57.021773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.350 [2024-12-07 10:09:57.021816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.350 [2024-12-07 10:09:57.021838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.350 [2024-12-07 10:09:57.022320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.350 [2024-12-07 10:09:57.022493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.350 [2024-12-07 10:09:57.022500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.350 [2024-12-07 10:09:57.022507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.350 [2024-12-07 10:09:57.025186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.350 [2024-12-07 10:09:57.034328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.350 [2024-12-07 10:09:57.034618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.350 [2024-12-07 10:09:57.034638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.350 [2024-12-07 10:09:57.034645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.350 [2024-12-07 10:09:57.034816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.350 [2024-12-07 10:09:57.034993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.350 [2024-12-07 10:09:57.035002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.350 [2024-12-07 10:09:57.035008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.350 [2024-12-07 10:09:57.037683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.350 [2024-12-07 10:09:57.047228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.350 [2024-12-07 10:09:57.047549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.350 [2024-12-07 10:09:57.047565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.350 [2024-12-07 10:09:57.047572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.350 [2024-12-07 10:09:57.047744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.350 [2024-12-07 10:09:57.047919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.350 [2024-12-07 10:09:57.047927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.350 [2024-12-07 10:09:57.047934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.351 [2024-12-07 10:09:57.050662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.351 [2024-12-07 10:09:57.060245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.351 [2024-12-07 10:09:57.060647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.351 [2024-12-07 10:09:57.060664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.351 [2024-12-07 10:09:57.060671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.351 [2024-12-07 10:09:57.060848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.351 [2024-12-07 10:09:57.061030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.351 [2024-12-07 10:09:57.061039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.351 [2024-12-07 10:09:57.061046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.351 [2024-12-07 10:09:57.063892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.610 [2024-12-07 10:09:57.073413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.610 [2024-12-07 10:09:57.073770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.610 [2024-12-07 10:09:57.073787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.610 [2024-12-07 10:09:57.073795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.610 [2024-12-07 10:09:57.073976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.611 [2024-12-07 10:09:57.074158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.611 [2024-12-07 10:09:57.074167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.611 [2024-12-07 10:09:57.074175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.611 [2024-12-07 10:09:57.076999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.611 [2024-12-07 10:09:57.086514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.611 [2024-12-07 10:09:57.086859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.611 [2024-12-07 10:09:57.086877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.611 [2024-12-07 10:09:57.086885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.611 [2024-12-07 10:09:57.087071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.611 [2024-12-07 10:09:57.087249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.611 [2024-12-07 10:09:57.087258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.611 [2024-12-07 10:09:57.087266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.611 [2024-12-07 10:09:57.090096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.611 [2024-12-07 10:09:57.099619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.611 [2024-12-07 10:09:57.100072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.611 [2024-12-07 10:09:57.100088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.611 [2024-12-07 10:09:57.100096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.611 [2024-12-07 10:09:57.100272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.611 [2024-12-07 10:09:57.100449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.611 [2024-12-07 10:09:57.100457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.611 [2024-12-07 10:09:57.100464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.611 [2024-12-07 10:09:57.103388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.611 [2024-12-07 10:09:57.112732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.611 [2024-12-07 10:09:57.113181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.611 [2024-12-07 10:09:57.113199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.611 [2024-12-07 10:09:57.113207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.611 [2024-12-07 10:09:57.113384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.611 [2024-12-07 10:09:57.113561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.611 [2024-12-07 10:09:57.113569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.611 [2024-12-07 10:09:57.113576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.611 [2024-12-07 10:09:57.116404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.611 [2024-12-07 10:09:57.125920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.611 [2024-12-07 10:09:57.126251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.611 [2024-12-07 10:09:57.126295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.611 [2024-12-07 10:09:57.126318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.611 [2024-12-07 10:09:57.126897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.611 [2024-12-07 10:09:57.127456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.611 [2024-12-07 10:09:57.127465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.611 [2024-12-07 10:09:57.127471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.611 [2024-12-07 10:09:57.130297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.611 [2024-12-07 10:09:57.139055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.611 [2024-12-07 10:09:57.139514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.611 [2024-12-07 10:09:57.139530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.611 [2024-12-07 10:09:57.139538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.611 [2024-12-07 10:09:57.139714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.611 [2024-12-07 10:09:57.139891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.611 [2024-12-07 10:09:57.139899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.611 [2024-12-07 10:09:57.139905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.611 [2024-12-07 10:09:57.142729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.611 [2024-12-07 10:09:57.152158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.611 [2024-12-07 10:09:57.152461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.611 [2024-12-07 10:09:57.152504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.611 [2024-12-07 10:09:57.152527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.611 [2024-12-07 10:09:57.153121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.611 [2024-12-07 10:09:57.153613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.611 [2024-12-07 10:09:57.153624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.611 [2024-12-07 10:09:57.153633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.611 [2024-12-07 10:09:57.157690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.611 [2024-12-07 10:09:57.165695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.611 [2024-12-07 10:09:57.166154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.611 [2024-12-07 10:09:57.166171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.611 [2024-12-07 10:09:57.166182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.611 [2024-12-07 10:09:57.166353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.611 [2024-12-07 10:09:57.166525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.611 [2024-12-07 10:09:57.166533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.611 [2024-12-07 10:09:57.166539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.611 [2024-12-07 10:09:57.169374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.611 9453.00 IOPS, 36.93 MiB/s [2024-12-07T09:09:57.337Z] [2024-12-07 10:09:57.180201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.611 [2024-12-07 10:09:57.180556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.611 [2024-12-07 10:09:57.180572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.611 [2024-12-07 10:09:57.180580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.611 [2024-12-07 10:09:57.180756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.611 [2024-12-07 10:09:57.180933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.611 [2024-12-07 10:09:57.180941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.611 [2024-12-07 10:09:57.180955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.611 [2024-12-07 10:09:57.183778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.611 [2024-12-07 10:09:57.193299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.611 [2024-12-07 10:09:57.193605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.611 [2024-12-07 10:09:57.193622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.611 [2024-12-07 10:09:57.193630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.611 [2024-12-07 10:09:57.193805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.611 [2024-12-07 10:09:57.193989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.611 [2024-12-07 10:09:57.193998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.611 [2024-12-07 10:09:57.194004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.611 [2024-12-07 10:09:57.196828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.611 [2024-12-07 10:09:57.206343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.611 [2024-12-07 10:09:57.206711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.611 [2024-12-07 10:09:57.206727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.611 [2024-12-07 10:09:57.206735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.612 [2024-12-07 10:09:57.206912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.612 [2024-12-07 10:09:57.207099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.612 [2024-12-07 10:09:57.207108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.612 [2024-12-07 10:09:57.207115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.612 [2024-12-07 10:09:57.209988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.612 [2024-12-07 10:09:57.219536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.612 [2024-12-07 10:09:57.219889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.612 [2024-12-07 10:09:57.219906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.612 [2024-12-07 10:09:57.219914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.612 [2024-12-07 10:09:57.220096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.612 [2024-12-07 10:09:57.220273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.612 [2024-12-07 10:09:57.220281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.612 [2024-12-07 10:09:57.220288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.612 [2024-12-07 10:09:57.223116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.612 [2024-12-07 10:09:57.232643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.612 [2024-12-07 10:09:57.233090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.612 [2024-12-07 10:09:57.233107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.612 [2024-12-07 10:09:57.233114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.612 [2024-12-07 10:09:57.233290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.612 [2024-12-07 10:09:57.233468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.612 [2024-12-07 10:09:57.233476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.612 [2024-12-07 10:09:57.233482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.612 [2024-12-07 10:09:57.236315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.612 [2024-12-07 10:09:57.245799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.612 [2024-12-07 10:09:57.246128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.612 [2024-12-07 10:09:57.246176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.612 [2024-12-07 10:09:57.246199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.612 [2024-12-07 10:09:57.246747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.612 [2024-12-07 10:09:57.246925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.612 [2024-12-07 10:09:57.246933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.612 [2024-12-07 10:09:57.246939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.612 [2024-12-07 10:09:57.249728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.612 [2024-12-07 10:09:57.258871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.612 [2024-12-07 10:09:57.259206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.612 [2024-12-07 10:09:57.259223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.612 [2024-12-07 10:09:57.259230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.612 [2024-12-07 10:09:57.259406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.612 [2024-12-07 10:09:57.259583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.612 [2024-12-07 10:09:57.259591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.612 [2024-12-07 10:09:57.259597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.612 [2024-12-07 10:09:57.262391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.612 [2024-12-07 10:09:57.271773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.612 [2024-12-07 10:09:57.272165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.612 [2024-12-07 10:09:57.272183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.612 [2024-12-07 10:09:57.272190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.612 [2024-12-07 10:09:57.272362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.612 [2024-12-07 10:09:57.272534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.612 [2024-12-07 10:09:57.272541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.612 [2024-12-07 10:09:57.272548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.612 [2024-12-07 10:09:57.275276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.612 [2024-12-07 10:09:57.284671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.612 [2024-12-07 10:09:57.285015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.612 [2024-12-07 10:09:57.285032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.612 [2024-12-07 10:09:57.285040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.612 [2024-12-07 10:09:57.285211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.612 [2024-12-07 10:09:57.285383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.612 [2024-12-07 10:09:57.285391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.612 [2024-12-07 10:09:57.285397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.612 [2024-12-07 10:09:57.288152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.612 [2024-12-07 10:09:57.297596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.612 [2024-12-07 10:09:57.297929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.612 [2024-12-07 10:09:57.297945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.612 [2024-12-07 10:09:57.297961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.612 [2024-12-07 10:09:57.298132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.612 [2024-12-07 10:09:57.298304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.612 [2024-12-07 10:09:57.298311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.612 [2024-12-07 10:09:57.298318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.612 [2024-12-07 10:09:57.300993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.612 [2024-12-07 10:09:57.310459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.612 [2024-12-07 10:09:57.310811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.612 [2024-12-07 10:09:57.310827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.612 [2024-12-07 10:09:57.310835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.612 [2024-12-07 10:09:57.311010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.612 [2024-12-07 10:09:57.311184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.612 [2024-12-07 10:09:57.311192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.612 [2024-12-07 10:09:57.311198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.612 [2024-12-07 10:09:57.313871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.612 [2024-12-07 10:09:57.323444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.612 [2024-12-07 10:09:57.323883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.612 [2024-12-07 10:09:57.323901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.612 [2024-12-07 10:09:57.323908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.612 [2024-12-07 10:09:57.324091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.612 [2024-12-07 10:09:57.324278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.612 [2024-12-07 10:09:57.324285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.612 [2024-12-07 10:09:57.324292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.612 [2024-12-07 10:09:57.326962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.872 [2024-12-07 10:09:57.336441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.872 [2024-12-07 10:09:57.336865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.872 [2024-12-07 10:09:57.336881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.872 [2024-12-07 10:09:57.336889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.872 [2024-12-07 10:09:57.337066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.872 [2024-12-07 10:09:57.337258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.872 [2024-12-07 10:09:57.337270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.872 [2024-12-07 10:09:57.337277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.872 [2024-12-07 10:09:57.340116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.872 [2024-12-07 10:09:57.349319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.872 [2024-12-07 10:09:57.349717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.872 [2024-12-07 10:09:57.349762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.872 [2024-12-07 10:09:57.349785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.872 [2024-12-07 10:09:57.350377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.872 [2024-12-07 10:09:57.350970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.872 [2024-12-07 10:09:57.350995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.872 [2024-12-07 10:09:57.351016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.872 [2024-12-07 10:09:57.353990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.872 [2024-12-07 10:09:57.362267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.872 [2024-12-07 10:09:57.362730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.872 [2024-12-07 10:09:57.362774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.872 [2024-12-07 10:09:57.362798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.872 [2024-12-07 10:09:57.363391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.872 [2024-12-07 10:09:57.363985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.872 [2024-12-07 10:09:57.364011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.872 [2024-12-07 10:09:57.364043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.872 [2024-12-07 10:09:57.366818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.872 [2024-12-07 10:09:57.375148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.872 [2024-12-07 10:09:57.375619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.872 [2024-12-07 10:09:57.375662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.872 [2024-12-07 10:09:57.375685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.872 [2024-12-07 10:09:57.376163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.872 [2024-12-07 10:09:57.376418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.872 [2024-12-07 10:09:57.376429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.872 [2024-12-07 10:09:57.376438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.872 [2024-12-07 10:09:57.380493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.872 [2024-12-07 10:09:57.388335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.872 [2024-12-07 10:09:57.388811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.872 [2024-12-07 10:09:57.388855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.872 [2024-12-07 10:09:57.388877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.872 [2024-12-07 10:09:57.389311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.872 [2024-12-07 10:09:57.389483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.872 [2024-12-07 10:09:57.389491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.873 [2024-12-07 10:09:57.389497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.873 [2024-12-07 10:09:57.392206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.873 [2024-12-07 10:09:57.401231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.873 [2024-12-07 10:09:57.401671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.873 [2024-12-07 10:09:57.401686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.873 [2024-12-07 10:09:57.401693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.873 [2024-12-07 10:09:57.401855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.873 [2024-12-07 10:09:57.402042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.873 [2024-12-07 10:09:57.402050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.873 [2024-12-07 10:09:57.402057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.873 [2024-12-07 10:09:57.404717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.873 [2024-12-07 10:09:57.414075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.873 [2024-12-07 10:09:57.414527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.873 [2024-12-07 10:09:57.414570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.873 [2024-12-07 10:09:57.414592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.873 [2024-12-07 10:09:57.415099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.873 [2024-12-07 10:09:57.415272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.873 [2024-12-07 10:09:57.415279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.873 [2024-12-07 10:09:57.415286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.873 [2024-12-07 10:09:57.417956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.873 [2024-12-07 10:09:57.426893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.873 [2024-12-07 10:09:57.427370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.873 [2024-12-07 10:09:57.427415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.873 [2024-12-07 10:09:57.427437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.873 [2024-12-07 10:09:57.427962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.873 [2024-12-07 10:09:57.428135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.873 [2024-12-07 10:09:57.428142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.873 [2024-12-07 10:09:57.428149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.873 [2024-12-07 10:09:57.430817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.873 [2024-12-07 10:09:57.439757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.873 [2024-12-07 10:09:57.440189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.873 [2024-12-07 10:09:57.440204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.873 [2024-12-07 10:09:57.440211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.873 [2024-12-07 10:09:57.440373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.873 [2024-12-07 10:09:57.440535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.873 [2024-12-07 10:09:57.440543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.873 [2024-12-07 10:09:57.440549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.873 [2024-12-07 10:09:57.443220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.873 [2024-12-07 10:09:57.452541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.873 [2024-12-07 10:09:57.452894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.873 [2024-12-07 10:09:57.452936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.873 [2024-12-07 10:09:57.452973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.873 [2024-12-07 10:09:57.453456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.873 [2024-12-07 10:09:57.453628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.873 [2024-12-07 10:09:57.453635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.873 [2024-12-07 10:09:57.453642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.873 [2024-12-07 10:09:57.456356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.873 [2024-12-07 10:09:57.465444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.873 [2024-12-07 10:09:57.465852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.873 [2024-12-07 10:09:57.465867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.873 [2024-12-07 10:09:57.465874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.873 [2024-12-07 10:09:57.466061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.873 [2024-12-07 10:09:57.466233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.873 [2024-12-07 10:09:57.466241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.873 [2024-12-07 10:09:57.466253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.873 [2024-12-07 10:09:57.468922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.873 [2024-12-07 10:09:57.478332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.873 [2024-12-07 10:09:57.478815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.873 [2024-12-07 10:09:57.478858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.873 [2024-12-07 10:09:57.478880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.873 [2024-12-07 10:09:57.479285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.873 [2024-12-07 10:09:57.479458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.873 [2024-12-07 10:09:57.479465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.873 [2024-12-07 10:09:57.479472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.873 [2024-12-07 10:09:57.482139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.873 [2024-12-07 10:09:57.491146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.873 [2024-12-07 10:09:57.491491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.873 [2024-12-07 10:09:57.491507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.873 [2024-12-07 10:09:57.491514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.873 [2024-12-07 10:09:57.491685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.873 [2024-12-07 10:09:57.491856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.873 [2024-12-07 10:09:57.491864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.873 [2024-12-07 10:09:57.491870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.873 [2024-12-07 10:09:57.494545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.873 [2024-12-07 10:09:57.504039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.873 [2024-12-07 10:09:57.504379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.873 [2024-12-07 10:09:57.504395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.873 [2024-12-07 10:09:57.504401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.873 [2024-12-07 10:09:57.504562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.873 [2024-12-07 10:09:57.504725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.873 [2024-12-07 10:09:57.504732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.873 [2024-12-07 10:09:57.504738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.873 [2024-12-07 10:09:57.507413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.873 [2024-12-07 10:09:57.516883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.873 [2024-12-07 10:09:57.517347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.873 [2024-12-07 10:09:57.517366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.873 [2024-12-07 10:09:57.517373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.873 [2024-12-07 10:09:57.517544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.873 [2024-12-07 10:09:57.517715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.873 [2024-12-07 10:09:57.517723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.874 [2024-12-07 10:09:57.517729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.874 [2024-12-07 10:09:57.520411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.874 [2024-12-07 10:09:57.529733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.874 [2024-12-07 10:09:57.530163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.874 [2024-12-07 10:09:57.530179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.874 [2024-12-07 10:09:57.530186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.874 [2024-12-07 10:09:57.530348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.874 [2024-12-07 10:09:57.530510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.874 [2024-12-07 10:09:57.530517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.874 [2024-12-07 10:09:57.530523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.874 [2024-12-07 10:09:57.533211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.874 [2024-12-07 10:09:57.542573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.874 [2024-12-07 10:09:57.543028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.874 [2024-12-07 10:09:57.543045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.874 [2024-12-07 10:09:57.543052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.874 [2024-12-07 10:09:57.543227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.874 [2024-12-07 10:09:57.543390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.874 [2024-12-07 10:09:57.543397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.874 [2024-12-07 10:09:57.543403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.874 [2024-12-07 10:09:57.546099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.874 [2024-12-07 10:09:57.555567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.874 [2024-12-07 10:09:57.555999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.874 [2024-12-07 10:09:57.556031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.874 [2024-12-07 10:09:57.556054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.874 [2024-12-07 10:09:57.556615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.874 [2024-12-07 10:09:57.556790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.874 [2024-12-07 10:09:57.556797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.874 [2024-12-07 10:09:57.556803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.874 [2024-12-07 10:09:57.559514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.874 [2024-12-07 10:09:57.568390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.874 [2024-12-07 10:09:57.568800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.874 [2024-12-07 10:09:57.568816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.874 [2024-12-07 10:09:57.568822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.874 [2024-12-07 10:09:57.569006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.874 [2024-12-07 10:09:57.569180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.874 [2024-12-07 10:09:57.569188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.874 [2024-12-07 10:09:57.569195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.874 [2024-12-07 10:09:57.571938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.874 [2024-12-07 10:09:57.581418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:28.874 [2024-12-07 10:09:57.581799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.874 [2024-12-07 10:09:57.581816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:28.874 [2024-12-07 10:09:57.581825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:28.874 [2024-12-07 10:09:57.582007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:28.874 [2024-12-07 10:09:57.582186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:28.874 [2024-12-07 10:09:57.582194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:28.874 [2024-12-07 10:09:57.582201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:28.874 [2024-12-07 10:09:57.585026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:28.874 [2024-12-07 10:09:57.594503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.133 [2024-12-07 10:09:57.594954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.133 [2024-12-07 10:09:57.594971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.133 [2024-12-07 10:09:57.594978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.133 [2024-12-07 10:09:57.595154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.133 [2024-12-07 10:09:57.595331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.133 [2024-12-07 10:09:57.595339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.133 [2024-12-07 10:09:57.595346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.133 [2024-12-07 10:09:57.598099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.133 [2024-12-07 10:09:57.607471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.133 [2024-12-07 10:09:57.607885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.133 [2024-12-07 10:09:57.607902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.133 [2024-12-07 10:09:57.607910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.133 [2024-12-07 10:09:57.608099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.133 [2024-12-07 10:09:57.608272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.133 [2024-12-07 10:09:57.608279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.133 [2024-12-07 10:09:57.608286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.133 [2024-12-07 10:09:57.610955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.133 [2024-12-07 10:09:57.620373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.133 [2024-12-07 10:09:57.620820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.133 [2024-12-07 10:09:57.620863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.133 [2024-12-07 10:09:57.620887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.133 [2024-12-07 10:09:57.621487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.134 [2024-12-07 10:09:57.621659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.134 [2024-12-07 10:09:57.621667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.134 [2024-12-07 10:09:57.621674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.134 [2024-12-07 10:09:57.624343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.134 [2024-12-07 10:09:57.633186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.134 [2024-12-07 10:09:57.633658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.134 [2024-12-07 10:09:57.633701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.134 [2024-12-07 10:09:57.633724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.134 [2024-12-07 10:09:57.634319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.134 [2024-12-07 10:09:57.634776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.134 [2024-12-07 10:09:57.634783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.134 [2024-12-07 10:09:57.634790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.134 [2024-12-07 10:09:57.637413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.134 [2024-12-07 10:09:57.646051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.134 [2024-12-07 10:09:57.646497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.134 [2024-12-07 10:09:57.646542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.134 [2024-12-07 10:09:57.646574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.134 [2024-12-07 10:09:57.647099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.134 [2024-12-07 10:09:57.647272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.134 [2024-12-07 10:09:57.647280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.134 [2024-12-07 10:09:57.647286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.134 [2024-12-07 10:09:57.650024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.134 [2024-12-07 10:09:57.658882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.134 [2024-12-07 10:09:57.659355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.134 [2024-12-07 10:09:57.659399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.134 [2024-12-07 10:09:57.659421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.134 [2024-12-07 10:09:57.659785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.134 [2024-12-07 10:09:57.659961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.134 [2024-12-07 10:09:57.659970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.134 [2024-12-07 10:09:57.659977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.134 [2024-12-07 10:09:57.663778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.134 [2024-12-07 10:09:57.672373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.134 [2024-12-07 10:09:57.672816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.134 [2024-12-07 10:09:57.672859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.134 [2024-12-07 10:09:57.672883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.134 [2024-12-07 10:09:57.673475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.134 [2024-12-07 10:09:57.673927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.134 [2024-12-07 10:09:57.673935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.134 [2024-12-07 10:09:57.673941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.134 [2024-12-07 10:09:57.676711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.134 [2024-12-07 10:09:57.685228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.134 [2024-12-07 10:09:57.685693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.134 [2024-12-07 10:09:57.685709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.134 [2024-12-07 10:09:57.685716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.134 [2024-12-07 10:09:57.685887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.134 [2024-12-07 10:09:57.686065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.134 [2024-12-07 10:09:57.686077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.134 [2024-12-07 10:09:57.686084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.134 [2024-12-07 10:09:57.688752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.134 [2024-12-07 10:09:57.698085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.134 [2024-12-07 10:09:57.698519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.134 [2024-12-07 10:09:57.698534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.134 [2024-12-07 10:09:57.698541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.134 [2024-12-07 10:09:57.698703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.134 [2024-12-07 10:09:57.698865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.134 [2024-12-07 10:09:57.698873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.134 [2024-12-07 10:09:57.698879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.134 [2024-12-07 10:09:57.701567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.134 [2024-12-07 10:09:57.710927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.134 [2024-12-07 10:09:57.711393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.134 [2024-12-07 10:09:57.711410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.134 [2024-12-07 10:09:57.711417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.134 [2024-12-07 10:09:57.711587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.134 [2024-12-07 10:09:57.711759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.134 [2024-12-07 10:09:57.711767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.134 [2024-12-07 10:09:57.711773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.134 [2024-12-07 10:09:57.714454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.134 [2024-12-07 10:09:57.723771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.134 [2024-12-07 10:09:57.724187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.134 [2024-12-07 10:09:57.724203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.134 [2024-12-07 10:09:57.724210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.134 [2024-12-07 10:09:57.724372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.134 [2024-12-07 10:09:57.724534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.134 [2024-12-07 10:09:57.724542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.134 [2024-12-07 10:09:57.724548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.134 [2024-12-07 10:09:57.727211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.134 [2024-12-07 10:09:57.736692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.134 [2024-12-07 10:09:57.737109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.134 [2024-12-07 10:09:57.737155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.134 [2024-12-07 10:09:57.737177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.134 [2024-12-07 10:09:57.737741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.134 [2024-12-07 10:09:57.737903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.134 [2024-12-07 10:09:57.737910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.134 [2024-12-07 10:09:57.737917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.134 [2024-12-07 10:09:57.740603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.134 [2024-12-07 10:09:57.749533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.134 [2024-12-07 10:09:57.750002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.134 [2024-12-07 10:09:57.750048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.134 [2024-12-07 10:09:57.750072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.134 [2024-12-07 10:09:57.750650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.135 [2024-12-07 10:09:57.751086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.135 [2024-12-07 10:09:57.751094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.135 [2024-12-07 10:09:57.751101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.135 [2024-12-07 10:09:57.754906] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.135 [2024-12-07 10:09:57.763147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.135 [2024-12-07 10:09:57.763582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.135 [2024-12-07 10:09:57.763599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.135 [2024-12-07 10:09:57.763606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.135 [2024-12-07 10:09:57.763777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.135 [2024-12-07 10:09:57.763955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.135 [2024-12-07 10:09:57.763964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.135 [2024-12-07 10:09:57.763970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.135 [2024-12-07 10:09:57.766678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.135 [2024-12-07 10:09:57.775944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.135 [2024-12-07 10:09:57.776376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.135 [2024-12-07 10:09:57.776391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.135 [2024-12-07 10:09:57.776398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.135 [2024-12-07 10:09:57.776563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.135 [2024-12-07 10:09:57.776726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.135 [2024-12-07 10:09:57.776733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.135 [2024-12-07 10:09:57.776739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.135 [2024-12-07 10:09:57.779428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.135 [2024-12-07 10:09:57.788847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.135 [2024-12-07 10:09:57.789227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.135 [2024-12-07 10:09:57.789243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.135 [2024-12-07 10:09:57.789251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.135 [2024-12-07 10:09:57.789422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.135 [2024-12-07 10:09:57.789593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.135 [2024-12-07 10:09:57.789600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.135 [2024-12-07 10:09:57.789607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.135 [2024-12-07 10:09:57.792299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.135 [2024-12-07 10:09:57.801631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.135 [2024-12-07 10:09:57.801982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.135 [2024-12-07 10:09:57.801998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.135 [2024-12-07 10:09:57.802005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.135 [2024-12-07 10:09:57.802167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.135 [2024-12-07 10:09:57.802329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.135 [2024-12-07 10:09:57.802336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.135 [2024-12-07 10:09:57.802343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.135 [2024-12-07 10:09:57.805036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.135 [2024-12-07 10:09:57.814561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.135 [2024-12-07 10:09:57.815019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.135 [2024-12-07 10:09:57.815035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.135 [2024-12-07 10:09:57.815042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.135 [2024-12-07 10:09:57.815213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.135 [2024-12-07 10:09:57.815385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.135 [2024-12-07 10:09:57.815393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.135 [2024-12-07 10:09:57.815403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.135 [2024-12-07 10:09:57.818158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.135 [2024-12-07 10:09:57.827469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.135 [2024-12-07 10:09:57.827927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.135 [2024-12-07 10:09:57.827986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.135 [2024-12-07 10:09:57.828010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.135 [2024-12-07 10:09:57.828588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.135 [2024-12-07 10:09:57.829133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.135 [2024-12-07 10:09:57.829142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.135 [2024-12-07 10:09:57.829150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.135 [2024-12-07 10:09:57.832097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.135 [2024-12-07 10:09:57.840466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.135 [2024-12-07 10:09:57.840892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.135 [2024-12-07 10:09:57.840910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.135 [2024-12-07 10:09:57.840918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.135 [2024-12-07 10:09:57.841101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.135 [2024-12-07 10:09:57.841305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.135 [2024-12-07 10:09:57.841313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.135 [2024-12-07 10:09:57.841321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.135 [2024-12-07 10:09:57.844146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.135 [2024-12-07 10:09:57.853609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.135 [2024-12-07 10:09:57.854007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.135 [2024-12-07 10:09:57.854053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.135 [2024-12-07 10:09:57.854077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.135 [2024-12-07 10:09:57.854655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.135 [2024-12-07 10:09:57.855217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.135 [2024-12-07 10:09:57.855225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.135 [2024-12-07 10:09:57.855231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.395 [2024-12-07 10:09:57.858051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.395 [2024-12-07 10:09:57.866502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.395 [2024-12-07 10:09:57.866970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.395 [2024-12-07 10:09:57.867014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.395 [2024-12-07 10:09:57.867037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.395 [2024-12-07 10:09:57.867616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.395 [2024-12-07 10:09:57.868218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.395 [2024-12-07 10:09:57.868227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.395 [2024-12-07 10:09:57.868233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.395 [2024-12-07 10:09:57.870888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.395 [2024-12-07 10:09:57.879496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.395 [2024-12-07 10:09:57.879959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.395 [2024-12-07 10:09:57.879976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.395 [2024-12-07 10:09:57.879983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.395 [2024-12-07 10:09:57.880155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.395 [2024-12-07 10:09:57.880327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.395 [2024-12-07 10:09:57.880334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.395 [2024-12-07 10:09:57.880341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.395 [2024-12-07 10:09:57.883092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.395 [2024-12-07 10:09:57.892364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.395 [2024-12-07 10:09:57.892805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.395 [2024-12-07 10:09:57.892850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.395 [2024-12-07 10:09:57.892873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.395 [2024-12-07 10:09:57.893408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.395 [2024-12-07 10:09:57.893581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.395 [2024-12-07 10:09:57.893589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.395 [2024-12-07 10:09:57.893595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.395 [2024-12-07 10:09:57.896269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.395 [2024-12-07 10:09:57.905200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.395 [2024-12-07 10:09:57.905653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.395 [2024-12-07 10:09:57.905694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.395 [2024-12-07 10:09:57.905719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.395 [2024-12-07 10:09:57.906320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.395 [2024-12-07 10:09:57.906829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.395 [2024-12-07 10:09:57.906836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.395 [2024-12-07 10:09:57.906843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.395 [2024-12-07 10:09:57.909514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.395 [2024-12-07 10:09:57.918049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.395 [2024-12-07 10:09:57.918492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.395 [2024-12-07 10:09:57.918536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.395 [2024-12-07 10:09:57.918558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.395 [2024-12-07 10:09:57.918926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.395 [2024-12-07 10:09:57.919116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.395 [2024-12-07 10:09:57.919124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.395 [2024-12-07 10:09:57.919130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.395 [2024-12-07 10:09:57.921802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.395 [2024-12-07 10:09:57.930835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.395 [2024-12-07 10:09:57.931273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.395 [2024-12-07 10:09:57.931317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.395 [2024-12-07 10:09:57.931339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.395 [2024-12-07 10:09:57.931916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.395 [2024-12-07 10:09:57.932416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.395 [2024-12-07 10:09:57.932424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.395 [2024-12-07 10:09:57.932431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.395 [2024-12-07 10:09:57.935126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.395 [2024-12-07 10:09:57.943749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.395 [2024-12-07 10:09:57.944186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.395 [2024-12-07 10:09:57.944231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.395 [2024-12-07 10:09:57.944254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.395 [2024-12-07 10:09:57.944832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.395 [2024-12-07 10:09:57.945439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.395 [2024-12-07 10:09:57.945466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.395 [2024-12-07 10:09:57.945499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.395 [2024-12-07 10:09:57.948142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.395 [2024-12-07 10:09:57.956656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.395 [2024-12-07 10:09:57.957068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.395 [2024-12-07 10:09:57.957116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.395 [2024-12-07 10:09:57.957139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.395 [2024-12-07 10:09:57.957675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.395 [2024-12-07 10:09:57.957837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.395 [2024-12-07 10:09:57.957844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.395 [2024-12-07 10:09:57.957851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.395 [2024-12-07 10:09:57.960542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.395 [2024-12-07 10:09:57.969591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.395 [2024-12-07 10:09:57.970058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.395 [2024-12-07 10:09:57.970101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.395 [2024-12-07 10:09:57.970124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.395 [2024-12-07 10:09:57.970702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.395 [2024-12-07 10:09:57.971031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.396 [2024-12-07 10:09:57.971044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.396 [2024-12-07 10:09:57.971051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.396 [2024-12-07 10:09:57.973729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.396 [2024-12-07 10:09:57.982464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.396 [2024-12-07 10:09:57.982919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.396 [2024-12-07 10:09:57.982935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.396 [2024-12-07 10:09:57.982943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.396 [2024-12-07 10:09:57.983119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.396 [2024-12-07 10:09:57.983291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.396 [2024-12-07 10:09:57.983299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.396 [2024-12-07 10:09:57.983305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.396 [2024-12-07 10:09:57.985966] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.396 [2024-12-07 10:09:57.995283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.396 [2024-12-07 10:09:57.995633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.396 [2024-12-07 10:09:57.995652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.396 [2024-12-07 10:09:57.995659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.396 [2024-12-07 10:09:57.995820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.396 [2024-12-07 10:09:57.996005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.396 [2024-12-07 10:09:57.996013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.396 [2024-12-07 10:09:57.996020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.396 [2024-12-07 10:09:57.998688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.396 [2024-12-07 10:09:58.008173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.396 [2024-12-07 10:09:58.008527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.396 [2024-12-07 10:09:58.008542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.396 [2024-12-07 10:09:58.008548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.396 [2024-12-07 10:09:58.008710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.396 [2024-12-07 10:09:58.008872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.396 [2024-12-07 10:09:58.008880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.396 [2024-12-07 10:09:58.008886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.396 [2024-12-07 10:09:58.011577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.396 [2024-12-07 10:09:58.021102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.396 [2024-12-07 10:09:58.021450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.396 [2024-12-07 10:09:58.021465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.396 [2024-12-07 10:09:58.021472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.396 [2024-12-07 10:09:58.021643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.396 [2024-12-07 10:09:58.021815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.396 [2024-12-07 10:09:58.021823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.396 [2024-12-07 10:09:58.021829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.396 [2024-12-07 10:09:58.024516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.396 [2024-12-07 10:09:58.034007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.396 [2024-12-07 10:09:58.034445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.396 [2024-12-07 10:09:58.034461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.396 [2024-12-07 10:09:58.034467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.396 [2024-12-07 10:09:58.034629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.396 [2024-12-07 10:09:58.034795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.396 [2024-12-07 10:09:58.034803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.396 [2024-12-07 10:09:58.034809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.396 [2024-12-07 10:09:58.037502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.396 [2024-12-07 10:09:58.046884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.396 [2024-12-07 10:09:58.047247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.396 [2024-12-07 10:09:58.047264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.396 [2024-12-07 10:09:58.047271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.396 [2024-12-07 10:09:58.047442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.396 [2024-12-07 10:09:58.047614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.396 [2024-12-07 10:09:58.047622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.396 [2024-12-07 10:09:58.047628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.396 [2024-12-07 10:09:58.050322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.396 [2024-12-07 10:09:58.059804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.396 [2024-12-07 10:09:58.060253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.396 [2024-12-07 10:09:58.060298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.396 [2024-12-07 10:09:58.060320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.396 [2024-12-07 10:09:58.060782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.396 [2024-12-07 10:09:58.060945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.396 [2024-12-07 10:09:58.060958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.396 [2024-12-07 10:09:58.060965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.396 [2024-12-07 10:09:58.063716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.396 [2024-12-07 10:09:58.072608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.396 [2024-12-07 10:09:58.072962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.396 [2024-12-07 10:09:58.072978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.396 [2024-12-07 10:09:58.072985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.396 [2024-12-07 10:09:58.073165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.396 [2024-12-07 10:09:58.073327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.396 [2024-12-07 10:09:58.073334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.396 [2024-12-07 10:09:58.073340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.396 [2024-12-07 10:09:58.076101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.396 [2024-12-07 10:09:58.085460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.396 [2024-12-07 10:09:58.085924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.396 [2024-12-07 10:09:58.085980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.396 [2024-12-07 10:09:58.086004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.396 [2024-12-07 10:09:58.086508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.396 [2024-12-07 10:09:58.086680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.396 [2024-12-07 10:09:58.086687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.396 [2024-12-07 10:09:58.086694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.396 [2024-12-07 10:09:58.089474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.396 [2024-12-07 10:09:58.098608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.396 [2024-12-07 10:09:58.098991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.396 [2024-12-07 10:09:58.099009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.396 [2024-12-07 10:09:58.099017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.397 [2024-12-07 10:09:58.099206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.397 [2024-12-07 10:09:58.099379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.397 [2024-12-07 10:09:58.099388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.397 [2024-12-07 10:09:58.099395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.397 [2024-12-07 10:09:58.102141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.397 [2024-12-07 10:09:58.111544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.397 [2024-12-07 10:09:58.111974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.397 [2024-12-07 10:09:58.112020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.397 [2024-12-07 10:09:58.112043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.397 [2024-12-07 10:09:58.112524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.397 [2024-12-07 10:09:58.112686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.397 [2024-12-07 10:09:58.112694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.397 [2024-12-07 10:09:58.112699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.397 [2024-12-07 10:09:58.115542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.656 [2024-12-07 10:09:58.124651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.656 [2024-12-07 10:09:58.125082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.656 [2024-12-07 10:09:58.125098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.656 [2024-12-07 10:09:58.125110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.656 [2024-12-07 10:09:58.125272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.656 [2024-12-07 10:09:58.125435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.656 [2024-12-07 10:09:58.125442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.656 [2024-12-07 10:09:58.125448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.656 [2024-12-07 10:09:58.128210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.656 [2024-12-07 10:09:58.137546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.656 [2024-12-07 10:09:58.137957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.656 [2024-12-07 10:09:58.138001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.656 [2024-12-07 10:09:58.138024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.656 [2024-12-07 10:09:58.138603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.656 [2024-12-07 10:09:58.138998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.656 [2024-12-07 10:09:58.139006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.656 [2024-12-07 10:09:58.139013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.656 [2024-12-07 10:09:58.141734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.656 [2024-12-07 10:09:58.150407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.656 [2024-12-07 10:09:58.150842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.656 [2024-12-07 10:09:58.150858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.656 [2024-12-07 10:09:58.150865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.656 [2024-12-07 10:09:58.151051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.656 [2024-12-07 10:09:58.151224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.656 [2024-12-07 10:09:58.151232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.656 [2024-12-07 10:09:58.151238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.656 [2024-12-07 10:09:58.153967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.656 [2024-12-07 10:09:58.163295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:29.656 [2024-12-07 10:09:58.163713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:29.656 [2024-12-07 10:09:58.163757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:29.656 [2024-12-07 10:09:58.163780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:29.656 [2024-12-07 10:09:58.164374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:29.656 [2024-12-07 10:09:58.164945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:29.656 [2024-12-07 10:09:58.164960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:29.656 [2024-12-07 10:09:58.164967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:29.656 [2024-12-07 10:09:58.167580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:29.656 [2024-12-07 10:09:58.176080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.656 [2024-12-07 10:09:58.176488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.656 [2024-12-07 10:09:58.176503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.656 [2024-12-07 10:09:58.176510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.656 [2024-12-07 10:09:58.176672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.656 [2024-12-07 10:09:58.176834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.656 [2024-12-07 10:09:58.176842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.656 [2024-12-07 10:09:58.176848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.656 7089.75 IOPS, 27.69 MiB/s [2024-12-07T09:09:58.382Z] [2024-12-07 10:09:58.180713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.656 [2024-12-07 10:09:58.188906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.656 [2024-12-07 10:09:58.189315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.656 [2024-12-07 10:09:58.189331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.656 [2024-12-07 10:09:58.189339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.656 [2024-12-07 10:09:58.189510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.656 [2024-12-07 10:09:58.189682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.656 [2024-12-07 10:09:58.189689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.656 [2024-12-07 10:09:58.189696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.656 [2024-12-07 10:09:58.192371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.656 [2024-12-07 10:09:58.201825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.656 [2024-12-07 10:09:58.202282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.656 [2024-12-07 10:09:58.202299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.656 [2024-12-07 10:09:58.202306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.656 [2024-12-07 10:09:58.202478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.656 [2024-12-07 10:09:58.202650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.656 [2024-12-07 10:09:58.202658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.656 [2024-12-07 10:09:58.202664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.656 [2024-12-07 10:09:58.205346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.656 [2024-12-07 10:09:58.214716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.656 [2024-12-07 10:09:58.215147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.656 [2024-12-07 10:09:58.215193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.656 [2024-12-07 10:09:58.215216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.656 [2024-12-07 10:09:58.215796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.656 [2024-12-07 10:09:58.216220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.656 [2024-12-07 10:09:58.216229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.656 [2024-12-07 10:09:58.216235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.656 [2024-12-07 10:09:58.220270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.656 [2024-12-07 10:09:58.228301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.656 [2024-12-07 10:09:58.228723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.656 [2024-12-07 10:09:58.228758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.656 [2024-12-07 10:09:58.228784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.656 [2024-12-07 10:09:58.229329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.656 [2024-12-07 10:09:58.229501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.656 [2024-12-07 10:09:58.229509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.656 [2024-12-07 10:09:58.229516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.656 [2024-12-07 10:09:58.232251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.656 [2024-12-07 10:09:58.241305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.656 [2024-12-07 10:09:58.241730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.657 [2024-12-07 10:09:58.241746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.657 [2024-12-07 10:09:58.241753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.657 [2024-12-07 10:09:58.241925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.657 [2024-12-07 10:09:58.242105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.657 [2024-12-07 10:09:58.242113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.657 [2024-12-07 10:09:58.242120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.657 [2024-12-07 10:09:58.244790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.657 [2024-12-07 10:09:58.254268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.657 [2024-12-07 10:09:58.254708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.657 [2024-12-07 10:09:58.254752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.657 [2024-12-07 10:09:58.254783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.657 [2024-12-07 10:09:58.255283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.657 [2024-12-07 10:09:58.255456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.657 [2024-12-07 10:09:58.255464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.657 [2024-12-07 10:09:58.255470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.657 [2024-12-07 10:09:58.258166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.657 [2024-12-07 10:09:58.267178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.657 [2024-12-07 10:09:58.267619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.657 [2024-12-07 10:09:58.267662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.657 [2024-12-07 10:09:58.267685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.657 [2024-12-07 10:09:58.268182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.657 [2024-12-07 10:09:58.268355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.657 [2024-12-07 10:09:58.268362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.657 [2024-12-07 10:09:58.268369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.657 [2024-12-07 10:09:58.271043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.657 [2024-12-07 10:09:58.280058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.657 [2024-12-07 10:09:58.280493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.657 [2024-12-07 10:09:58.280526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.657 [2024-12-07 10:09:58.280549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.657 [2024-12-07 10:09:58.281114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.657 [2024-12-07 10:09:58.281288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.657 [2024-12-07 10:09:58.281296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.657 [2024-12-07 10:09:58.281303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.657 [2024-12-07 10:09:58.283976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.657 [2024-12-07 10:09:58.292893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.657 [2024-12-07 10:09:58.293286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.657 [2024-12-07 10:09:58.293303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.657 [2024-12-07 10:09:58.293310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.657 [2024-12-07 10:09:58.293482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.657 [2024-12-07 10:09:58.293654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.657 [2024-12-07 10:09:58.293662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.657 [2024-12-07 10:09:58.293672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.657 [2024-12-07 10:09:58.296369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.657 [2024-12-07 10:09:58.305772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.657 [2024-12-07 10:09:58.306137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.657 [2024-12-07 10:09:58.306153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.657 [2024-12-07 10:09:58.306160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.657 [2024-12-07 10:09:58.306321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.657 [2024-12-07 10:09:58.306484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.657 [2024-12-07 10:09:58.306491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.657 [2024-12-07 10:09:58.306498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.657 [2024-12-07 10:09:58.309194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.657 [2024-12-07 10:09:58.318734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.657 [2024-12-07 10:09:58.319130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.657 [2024-12-07 10:09:58.319147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.657 [2024-12-07 10:09:58.319155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.657 [2024-12-07 10:09:58.319327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.657 [2024-12-07 10:09:58.319498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.657 [2024-12-07 10:09:58.319506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.657 [2024-12-07 10:09:58.319513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.657 [2024-12-07 10:09:58.322216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.657 [2024-12-07 10:09:58.331711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.657 [2024-12-07 10:09:58.332100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.657 [2024-12-07 10:09:58.332117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.657 [2024-12-07 10:09:58.332124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.657 [2024-12-07 10:09:58.332296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.657 [2024-12-07 10:09:58.332469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.657 [2024-12-07 10:09:58.332476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.657 [2024-12-07 10:09:58.332483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.657 [2024-12-07 10:09:58.335162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.657 [2024-12-07 10:09:58.344659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.657 [2024-12-07 10:09:58.345087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.657 [2024-12-07 10:09:58.345104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.657 [2024-12-07 10:09:58.345112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.657 [2024-12-07 10:09:58.345288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.657 [2024-12-07 10:09:58.345467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.657 [2024-12-07 10:09:58.345475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.657 [2024-12-07 10:09:58.345482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.657 [2024-12-07 10:09:58.348324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.657 [2024-12-07 10:09:58.357865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.657 [2024-12-07 10:09:58.358296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.657 [2024-12-07 10:09:58.358329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.657 [2024-12-07 10:09:58.358352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.657 [2024-12-07 10:09:58.358932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.657 [2024-12-07 10:09:58.359463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.657 [2024-12-07 10:09:58.359473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.657 [2024-12-07 10:09:58.359479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.657 [2024-12-07 10:09:58.362314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.657 [2024-12-07 10:09:58.370994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.657 [2024-12-07 10:09:58.371376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.658 [2024-12-07 10:09:58.371393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.658 [2024-12-07 10:09:58.371401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.658 [2024-12-07 10:09:58.371577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.658 [2024-12-07 10:09:58.371754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.658 [2024-12-07 10:09:58.371762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.658 [2024-12-07 10:09:58.371768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.658 [2024-12-07 10:09:58.374621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.917 [2024-12-07 10:09:58.384161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.917 [2024-12-07 10:09:58.384635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.917 [2024-12-07 10:09:58.384651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.917 [2024-12-07 10:09:58.384659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.917 [2024-12-07 10:09:58.384838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.917 [2024-12-07 10:09:58.385023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.917 [2024-12-07 10:09:58.385032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.917 [2024-12-07 10:09:58.385039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.917 [2024-12-07 10:09:58.387786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.917 [2024-12-07 10:09:58.397138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.917 [2024-12-07 10:09:58.397562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.917 [2024-12-07 10:09:58.397607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.917 [2024-12-07 10:09:58.397630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.917 [2024-12-07 10:09:58.398223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.917 [2024-12-07 10:09:58.398652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.917 [2024-12-07 10:09:58.398660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.917 [2024-12-07 10:09:58.398666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.917 [2024-12-07 10:09:58.402457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.917 [2024-12-07 10:09:58.410836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.917 [2024-12-07 10:09:58.411309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.917 [2024-12-07 10:09:58.411354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.917 [2024-12-07 10:09:58.411377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.917 [2024-12-07 10:09:58.411776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.917 [2024-12-07 10:09:58.411955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.917 [2024-12-07 10:09:58.411963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.917 [2024-12-07 10:09:58.411970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.917 [2024-12-07 10:09:58.414722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.917 [2024-12-07 10:09:58.423800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.917 [2024-12-07 10:09:58.424260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.917 [2024-12-07 10:09:58.424276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.917 [2024-12-07 10:09:58.424283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.917 [2024-12-07 10:09:58.424454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.917 [2024-12-07 10:09:58.424627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.917 [2024-12-07 10:09:58.424635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.917 [2024-12-07 10:09:58.424645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.917 [2024-12-07 10:09:58.427423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.917 [2024-12-07 10:09:58.436761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.917 [2024-12-07 10:09:58.437132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.917 [2024-12-07 10:09:58.437175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.917 [2024-12-07 10:09:58.437198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.917 [2024-12-07 10:09:58.437706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.917 [2024-12-07 10:09:58.437878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.917 [2024-12-07 10:09:58.437885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.917 [2024-12-07 10:09:58.437892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.917 [2024-12-07 10:09:58.440628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.917 [2024-12-07 10:09:58.449665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.917 [2024-12-07 10:09:58.450077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.917 [2024-12-07 10:09:58.450093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.917 [2024-12-07 10:09:58.450099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.917 [2024-12-07 10:09:58.450262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.917 [2024-12-07 10:09:58.450424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.917 [2024-12-07 10:09:58.450431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.917 [2024-12-07 10:09:58.450437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.917 [2024-12-07 10:09:58.453192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.917 [2024-12-07 10:09:58.462567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.917 [2024-12-07 10:09:58.462934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.917 [2024-12-07 10:09:58.462956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.917 [2024-12-07 10:09:58.462964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.917 [2024-12-07 10:09:58.463165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.917 [2024-12-07 10:09:58.463337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.917 [2024-12-07 10:09:58.463345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.917 [2024-12-07 10:09:58.463351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.917 [2024-12-07 10:09:58.466032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.917 [2024-12-07 10:09:58.475452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.917 [2024-12-07 10:09:58.475913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.917 [2024-12-07 10:09:58.475933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.917 [2024-12-07 10:09:58.475940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.917 [2024-12-07 10:09:58.476123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.917 [2024-12-07 10:09:58.476308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.917 [2024-12-07 10:09:58.476316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.917 [2024-12-07 10:09:58.476322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.917 [2024-12-07 10:09:58.479021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.917 [2024-12-07 10:09:58.488344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.917 [2024-12-07 10:09:58.488799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.917 [2024-12-07 10:09:58.488841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.917 [2024-12-07 10:09:58.488863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.917 [2024-12-07 10:09:58.489458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.917 [2024-12-07 10:09:58.489929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.917 [2024-12-07 10:09:58.489940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.917 [2024-12-07 10:09:58.489955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.917 [2024-12-07 10:09:58.494023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.918 [2024-12-07 10:09:58.501767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.918 [2024-12-07 10:09:58.502149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.918 [2024-12-07 10:09:58.502165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.918 [2024-12-07 10:09:58.502173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.918 [2024-12-07 10:09:58.502344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.918 [2024-12-07 10:09:58.502516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.918 [2024-12-07 10:09:58.502524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.918 [2024-12-07 10:09:58.502531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.918 [2024-12-07 10:09:58.505253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.918 [2024-12-07 10:09:58.514729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.918 [2024-12-07 10:09:58.515194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.918 [2024-12-07 10:09:58.515210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.918 [2024-12-07 10:09:58.515217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.918 [2024-12-07 10:09:58.515388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.918 [2024-12-07 10:09:58.515563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.918 [2024-12-07 10:09:58.515571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.918 [2024-12-07 10:09:58.515578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.918 [2024-12-07 10:09:58.518292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.918 [2024-12-07 10:09:58.527602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.918 [2024-12-07 10:09:58.528036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.918 [2024-12-07 10:09:58.528054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.918 [2024-12-07 10:09:58.528061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.918 [2024-12-07 10:09:58.528232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.918 [2024-12-07 10:09:58.528405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.918 [2024-12-07 10:09:58.528412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.918 [2024-12-07 10:09:58.528418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.918 [2024-12-07 10:09:58.531122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.918 [2024-12-07 10:09:58.540441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.918 [2024-12-07 10:09:58.540783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.918 [2024-12-07 10:09:58.540799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.918 [2024-12-07 10:09:58.540806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.918 [2024-12-07 10:09:58.540984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.918 [2024-12-07 10:09:58.541158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.918 [2024-12-07 10:09:58.541166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.918 [2024-12-07 10:09:58.541172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.918 [2024-12-07 10:09:58.543847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.918 [2024-12-07 10:09:58.553464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.918 [2024-12-07 10:09:58.553881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.918 [2024-12-07 10:09:58.553897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.918 [2024-12-07 10:09:58.553904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.918 [2024-12-07 10:09:58.554100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.918 [2024-12-07 10:09:58.554278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.918 [2024-12-07 10:09:58.554286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.918 [2024-12-07 10:09:58.554293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.918 [2024-12-07 10:09:58.557086] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.918 [2024-12-07 10:09:58.566429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.918 [2024-12-07 10:09:58.566880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.918 [2024-12-07 10:09:58.566922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.918 [2024-12-07 10:09:58.566945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.918 [2024-12-07 10:09:58.567540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.918 [2024-12-07 10:09:58.568086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.918 [2024-12-07 10:09:58.568094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.918 [2024-12-07 10:09:58.568101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.918 [2024-12-07 10:09:58.570837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.918 [2024-12-07 10:09:58.579406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.918 [2024-12-07 10:09:58.579837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.918 [2024-12-07 10:09:58.579852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.918 [2024-12-07 10:09:58.579860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.918 [2024-12-07 10:09:58.580036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.918 [2024-12-07 10:09:58.580210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.918 [2024-12-07 10:09:58.580217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.918 [2024-12-07 10:09:58.580224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.918 [2024-12-07 10:09:58.582972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.918 [2024-12-07 10:09:58.592330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.918 [2024-12-07 10:09:58.592781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.918 [2024-12-07 10:09:58.592797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.918 [2024-12-07 10:09:58.592804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.918 [2024-12-07 10:09:58.592979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.918 [2024-12-07 10:09:58.593153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.918 [2024-12-07 10:09:58.593160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.918 [2024-12-07 10:09:58.593166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.918 [2024-12-07 10:09:58.595887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.918 [2024-12-07 10:09:58.605608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.918 [2024-12-07 10:09:58.606092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.918 [2024-12-07 10:09:58.606120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.918 [2024-12-07 10:09:58.606132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.918 [2024-12-07 10:09:58.606323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.918 [2024-12-07 10:09:58.606502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.918 [2024-12-07 10:09:58.606510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.918 [2024-12-07 10:09:58.606517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.918 [2024-12-07 10:09:58.609389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.918 [2024-12-07 10:09:58.618662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.918 [2024-12-07 10:09:58.619140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.918 [2024-12-07 10:09:58.619186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.918 [2024-12-07 10:09:58.619208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.918 [2024-12-07 10:09:58.619786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.918 [2024-12-07 10:09:58.620074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.918 [2024-12-07 10:09:58.620083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.918 [2024-12-07 10:09:58.620089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.918 [2024-12-07 10:09:58.622825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.919 [2024-12-07 10:09:58.631636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:29.919 [2024-12-07 10:09:58.632085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.919 [2024-12-07 10:09:58.632102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:29.919 [2024-12-07 10:09:58.632110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:29.919 [2024-12-07 10:09:58.632281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:29.919 [2024-12-07 10:09:58.632454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:29.919 [2024-12-07 10:09:58.632461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:29.919 [2024-12-07 10:09:58.632468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:29.919 [2024-12-07 10:09:58.635166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.178 [2024-12-07 10:09:58.644595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.178 [2024-12-07 10:09:58.645046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.178 [2024-12-07 10:09:58.645063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.178 [2024-12-07 10:09:58.645070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.178 [2024-12-07 10:09:58.645247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.178 [2024-12-07 10:09:58.645424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.178 [2024-12-07 10:09:58.645435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.178 [2024-12-07 10:09:58.645442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.178 [2024-12-07 10:09:58.648181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.178 [2024-12-07 10:09:58.657579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.178 [2024-12-07 10:09:58.657976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.178 [2024-12-07 10:09:58.657994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.178 [2024-12-07 10:09:58.658001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.178 [2024-12-07 10:09:58.658172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.178 [2024-12-07 10:09:58.658344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.178 [2024-12-07 10:09:58.658352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.178 [2024-12-07 10:09:58.658358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.178 [2024-12-07 10:09:58.661069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.178 [2024-12-07 10:09:58.670558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.178 [2024-12-07 10:09:58.671021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.178 [2024-12-07 10:09:58.671065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.178 [2024-12-07 10:09:58.671089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.178 [2024-12-07 10:09:58.671472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.178 [2024-12-07 10:09:58.671644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.178 [2024-12-07 10:09:58.671651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.179 [2024-12-07 10:09:58.671658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.179 [2024-12-07 10:09:58.675526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.179 [2024-12-07 10:09:58.684166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.179 [2024-12-07 10:09:58.684482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.179 [2024-12-07 10:09:58.684497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.179 [2024-12-07 10:09:58.684505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.179 [2024-12-07 10:09:58.684676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.179 [2024-12-07 10:09:58.684848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.179 [2024-12-07 10:09:58.684855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.179 [2024-12-07 10:09:58.684862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.179 [2024-12-07 10:09:58.687582] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.179 [2024-12-07 10:09:58.697200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.179 [2024-12-07 10:09:58.697519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.179 [2024-12-07 10:09:58.697535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.179 [2024-12-07 10:09:58.697542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.179 [2024-12-07 10:09:58.697714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.179 [2024-12-07 10:09:58.697886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.179 [2024-12-07 10:09:58.697893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.179 [2024-12-07 10:09:58.697900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.179 [2024-12-07 10:09:58.700637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.179 [2024-12-07 10:09:58.710155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.179 [2024-12-07 10:09:58.710544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.179 [2024-12-07 10:09:58.710561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.179 [2024-12-07 10:09:58.710568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.179 [2024-12-07 10:09:58.710740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.179 [2024-12-07 10:09:58.710911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.179 [2024-12-07 10:09:58.710919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.179 [2024-12-07 10:09:58.710925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.179 [2024-12-07 10:09:58.713610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.179 [2024-12-07 10:09:58.723004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.179 [2024-12-07 10:09:58.723350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.179 [2024-12-07 10:09:58.723366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.179 [2024-12-07 10:09:58.723373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.179 [2024-12-07 10:09:58.723544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.179 [2024-12-07 10:09:58.723716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.179 [2024-12-07 10:09:58.723724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.179 [2024-12-07 10:09:58.723731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.179 [2024-12-07 10:09:58.726407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.179 [2024-12-07 10:09:58.735887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.179 [2024-12-07 10:09:58.736264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.179 [2024-12-07 10:09:58.736281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.179 [2024-12-07 10:09:58.736288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.179 [2024-12-07 10:09:58.736462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.179 [2024-12-07 10:09:58.736634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.179 [2024-12-07 10:09:58.736642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.179 [2024-12-07 10:09:58.736649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.179 [2024-12-07 10:09:58.739360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.179 [2024-12-07 10:09:58.748857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.179 [2024-12-07 10:09:58.749163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.179 [2024-12-07 10:09:58.749179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.179 [2024-12-07 10:09:58.749187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.179 [2024-12-07 10:09:58.749358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.179 [2024-12-07 10:09:58.749530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.179 [2024-12-07 10:09:58.749537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.179 [2024-12-07 10:09:58.749544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.179 [2024-12-07 10:09:58.752273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.179 [2024-12-07 10:09:58.761818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.179 [2024-12-07 10:09:58.762233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.179 [2024-12-07 10:09:58.762277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.179 [2024-12-07 10:09:58.762300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.179 [2024-12-07 10:09:58.762878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.179 [2024-12-07 10:09:58.763175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.179 [2024-12-07 10:09:58.763187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.179 [2024-12-07 10:09:58.763197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.179 [2024-12-07 10:09:58.767253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.179 [2024-12-07 10:09:58.775322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.179 [2024-12-07 10:09:58.775765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.179 [2024-12-07 10:09:58.775780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.179 [2024-12-07 10:09:58.775788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.179 [2024-12-07 10:09:58.775963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.179 [2024-12-07 10:09:58.776137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.179 [2024-12-07 10:09:58.776144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.179 [2024-12-07 10:09:58.776157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.179 [2024-12-07 10:09:58.778900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.179 [2024-12-07 10:09:58.788206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.179 [2024-12-07 10:09:58.788611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.179 [2024-12-07 10:09:58.788627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.179 [2024-12-07 10:09:58.788634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.179 [2024-12-07 10:09:58.788796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.179 [2024-12-07 10:09:58.788964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.179 [2024-12-07 10:09:58.788972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.179 [2024-12-07 10:09:58.788994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.179 [2024-12-07 10:09:58.791662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.179 [2024-12-07 10:09:58.800994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.179 [2024-12-07 10:09:58.801429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.179 [2024-12-07 10:09:58.801465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.179 [2024-12-07 10:09:58.801489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.179 [2024-12-07 10:09:58.802082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.179 [2024-12-07 10:09:58.802569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.180 [2024-12-07 10:09:58.802576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.180 [2024-12-07 10:09:58.802583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.180 [2024-12-07 10:09:58.805254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.180 [2024-12-07 10:09:58.813802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.180 [2024-12-07 10:09:58.814216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.180 [2024-12-07 10:09:58.814253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.180 [2024-12-07 10:09:58.814260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.180 [2024-12-07 10:09:58.814422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.180 [2024-12-07 10:09:58.814584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.180 [2024-12-07 10:09:58.814592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.180 [2024-12-07 10:09:58.814598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.180 [2024-12-07 10:09:58.817286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.180 [2024-12-07 10:09:58.826710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.180 [2024-12-07 10:09:58.827044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.180 [2024-12-07 10:09:58.827095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.180 [2024-12-07 10:09:58.827119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.180 [2024-12-07 10:09:58.827699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.180 [2024-12-07 10:09:58.827891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.180 [2024-12-07 10:09:58.827898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.180 [2024-12-07 10:09:58.827905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.180 [2024-12-07 10:09:58.830646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.180 [2024-12-07 10:09:58.839540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.180 [2024-12-07 10:09:58.839993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.180 [2024-12-07 10:09:58.840038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.180 [2024-12-07 10:09:58.840060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.180 [2024-12-07 10:09:58.840545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.180 [2024-12-07 10:09:58.840717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.180 [2024-12-07 10:09:58.840725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.180 [2024-12-07 10:09:58.840731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.180 [2024-12-07 10:09:58.843419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.180 [2024-12-07 10:09:58.852492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.180 [2024-12-07 10:09:58.852911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.180 [2024-12-07 10:09:58.852927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.180 [2024-12-07 10:09:58.852934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.180 [2024-12-07 10:09:58.853538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.180 [2024-12-07 10:09:58.853712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.180 [2024-12-07 10:09:58.853720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.180 [2024-12-07 10:09:58.853728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.180 [2024-12-07 10:09:58.856530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.180 [2024-12-07 10:09:58.865628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.180 [2024-12-07 10:09:58.866075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.180 [2024-12-07 10:09:58.866094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.180 [2024-12-07 10:09:58.866103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.180 [2024-12-07 10:09:58.866290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.180 [2024-12-07 10:09:58.866462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.180 [2024-12-07 10:09:58.866471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.180 [2024-12-07 10:09:58.866479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.180 [2024-12-07 10:09:58.869258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.180 [2024-12-07 10:09:58.878623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.180 [2024-12-07 10:09:58.879062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.180 [2024-12-07 10:09:58.879101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.180 [2024-12-07 10:09:58.879126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.180 [2024-12-07 10:09:58.879685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.180 [2024-12-07 10:09:58.879848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.180 [2024-12-07 10:09:58.879855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.180 [2024-12-07 10:09:58.879861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.180 [2024-12-07 10:09:58.882554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.180 [2024-12-07 10:09:58.891462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.180 [2024-12-07 10:09:58.891793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.180 [2024-12-07 10:09:58.891809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.180 [2024-12-07 10:09:58.891816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.180 [2024-12-07 10:09:58.892000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.180 [2024-12-07 10:09:58.892172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.180 [2024-12-07 10:09:58.892180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.180 [2024-12-07 10:09:58.892186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.180 [2024-12-07 10:09:58.894858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.439 [2024-12-07 10:09:58.904420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.439 [2024-12-07 10:09:58.904849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.439 [2024-12-07 10:09:58.904865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.439 [2024-12-07 10:09:58.904872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.439 [2024-12-07 10:09:58.905067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.439 [2024-12-07 10:09:58.905245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.439 [2024-12-07 10:09:58.905253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.439 [2024-12-07 10:09:58.905263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.439 [2024-12-07 10:09:58.908094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.440 [2024-12-07 10:09:58.917327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.440 [2024-12-07 10:09:58.917761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.440 [2024-12-07 10:09:58.917778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.440 [2024-12-07 10:09:58.917786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.440 [2024-12-07 10:09:58.917963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.440 [2024-12-07 10:09:58.918136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.440 [2024-12-07 10:09:58.918144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.440 [2024-12-07 10:09:58.918151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.440 [2024-12-07 10:09:58.920809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.440 [2024-12-07 10:09:58.930172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.440 [2024-12-07 10:09:58.930592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.440 [2024-12-07 10:09:58.930607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.440 [2024-12-07 10:09:58.930614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.440 [2024-12-07 10:09:58.930776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.440 [2024-12-07 10:09:58.930939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.440 [2024-12-07 10:09:58.930946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.440 [2024-12-07 10:09:58.930958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.440 [2024-12-07 10:09:58.933646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.440 [2024-12-07 10:09:58.943020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.440 [2024-12-07 10:09:58.943452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.440 [2024-12-07 10:09:58.943484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.440 [2024-12-07 10:09:58.943509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.440 [2024-12-07 10:09:58.944102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.440 [2024-12-07 10:09:58.944386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.440 [2024-12-07 10:09:58.944393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.440 [2024-12-07 10:09:58.944400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.440 [2024-12-07 10:09:58.947078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.440 [2024-12-07 10:09:58.955850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.440 [2024-12-07 10:09:58.956281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.440 [2024-12-07 10:09:58.956301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.440 [2024-12-07 10:09:58.956309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.440 [2024-12-07 10:09:58.956480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.440 [2024-12-07 10:09:58.956654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.440 [2024-12-07 10:09:58.956662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.440 [2024-12-07 10:09:58.956668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.440 [2024-12-07 10:09:58.959364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.440 [2024-12-07 10:09:58.968760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.440 [2024-12-07 10:09:58.969203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.440 [2024-12-07 10:09:58.969238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.440 [2024-12-07 10:09:58.969262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.440 [2024-12-07 10:09:58.969841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.440 [2024-12-07 10:09:58.970341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.440 [2024-12-07 10:09:58.970350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.440 [2024-12-07 10:09:58.970356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.440 [2024-12-07 10:09:58.973030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.440 [2024-12-07 10:09:58.981686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.440 [2024-12-07 10:09:58.982122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.440 [2024-12-07 10:09:58.982139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.440 [2024-12-07 10:09:58.982147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.440 [2024-12-07 10:09:58.982321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.440 [2024-12-07 10:09:58.982483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.440 [2024-12-07 10:09:58.982491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.440 [2024-12-07 10:09:58.982497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.440 [2024-12-07 10:09:58.985187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.440 [2024-12-07 10:09:58.994554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.440 [2024-12-07 10:09:58.994981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.440 [2024-12-07 10:09:58.995013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.440 [2024-12-07 10:09:58.995037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.440 [2024-12-07 10:09:58.995615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.440 [2024-12-07 10:09:58.995844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.440 [2024-12-07 10:09:58.995852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.440 [2024-12-07 10:09:58.995859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.440 [2024-12-07 10:09:58.998540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.440 [2024-12-07 10:09:59.007447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.440 [2024-12-07 10:09:59.007875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.440 [2024-12-07 10:09:59.007892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.440 [2024-12-07 10:09:59.007899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.440 [2024-12-07 10:09:59.008077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.440 [2024-12-07 10:09:59.008249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.440 [2024-12-07 10:09:59.008257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.440 [2024-12-07 10:09:59.008263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.440 [2024-12-07 10:09:59.010933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.440 [2024-12-07 10:09:59.020304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.440 [2024-12-07 10:09:59.020714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.440 [2024-12-07 10:09:59.020762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.440 [2024-12-07 10:09:59.020786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.440 [2024-12-07 10:09:59.021378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.440 [2024-12-07 10:09:59.021973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.440 [2024-12-07 10:09:59.021998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.440 [2024-12-07 10:09:59.022005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.440 [2024-12-07 10:09:59.024676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.440 [2024-12-07 10:09:59.033294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.440 [2024-12-07 10:09:59.033717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.440 [2024-12-07 10:09:59.033762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.440 [2024-12-07 10:09:59.033785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.440 [2024-12-07 10:09:59.034240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.440 [2024-12-07 10:09:59.034412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.440 [2024-12-07 10:09:59.034420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.441 [2024-12-07 10:09:59.034427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.441 [2024-12-07 10:09:59.037101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.441 [2024-12-07 10:09:59.046115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.441 [2024-12-07 10:09:59.046555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.441 [2024-12-07 10:09:59.046598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.441 [2024-12-07 10:09:59.046621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.441 [2024-12-07 10:09:59.047125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.441 [2024-12-07 10:09:59.047298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.441 [2024-12-07 10:09:59.047306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.441 [2024-12-07 10:09:59.047312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.441 [2024-12-07 10:09:59.049987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.441 [2024-12-07 10:09:59.058991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.441 [2024-12-07 10:09:59.059426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.441 [2024-12-07 10:09:59.059442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.441 [2024-12-07 10:09:59.059449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.441 [2024-12-07 10:09:59.059620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.441 [2024-12-07 10:09:59.059792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.441 [2024-12-07 10:09:59.059799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.441 [2024-12-07 10:09:59.059806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.441 [2024-12-07 10:09:59.062489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.441 [2024-12-07 10:09:59.071839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.441 [2024-12-07 10:09:59.072255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.441 [2024-12-07 10:09:59.072298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.441 [2024-12-07 10:09:59.072322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.441 [2024-12-07 10:09:59.072900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.441 [2024-12-07 10:09:59.073326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.441 [2024-12-07 10:09:59.073334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.441 [2024-12-07 10:09:59.073341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.441 [2024-12-07 10:09:59.076082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.441 [2024-12-07 10:09:59.084655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.441 [2024-12-07 10:09:59.085090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.441 [2024-12-07 10:09:59.085125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.441 [2024-12-07 10:09:59.085157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.441 [2024-12-07 10:09:59.085736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.441 [2024-12-07 10:09:59.086260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.441 [2024-12-07 10:09:59.086268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.441 [2024-12-07 10:09:59.086275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.441 [2024-12-07 10:09:59.090143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.441 [2024-12-07 10:09:59.098206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.441 [2024-12-07 10:09:59.098638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.441 [2024-12-07 10:09:59.098655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.441 [2024-12-07 10:09:59.098662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.441 [2024-12-07 10:09:59.098833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.441 [2024-12-07 10:09:59.099012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.441 [2024-12-07 10:09:59.099020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.441 [2024-12-07 10:09:59.099027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.441 [2024-12-07 10:09:59.101735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.441 [2024-12-07 10:09:59.110998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.441 [2024-12-07 10:09:59.111431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.441 [2024-12-07 10:09:59.111447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.441 [2024-12-07 10:09:59.111455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.441 [2024-12-07 10:09:59.111627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.441 [2024-12-07 10:09:59.111799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.441 [2024-12-07 10:09:59.111807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.441 [2024-12-07 10:09:59.111814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.441 [2024-12-07 10:09:59.114574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.441 [2024-12-07 10:09:59.124198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.441 [2024-12-07 10:09:59.124641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.441 [2024-12-07 10:09:59.124658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.441 [2024-12-07 10:09:59.124666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.441 [2024-12-07 10:09:59.124843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.441 [2024-12-07 10:09:59.125029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.441 [2024-12-07 10:09:59.125041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.441 [2024-12-07 10:09:59.125049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.441 [2024-12-07 10:09:59.127838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.441 [2024-12-07 10:09:59.137195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.441 [2024-12-07 10:09:59.137671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.441 [2024-12-07 10:09:59.137718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.441 [2024-12-07 10:09:59.137742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.441 [2024-12-07 10:09:59.138337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.441 [2024-12-07 10:09:59.138831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.441 [2024-12-07 10:09:59.138840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.441 [2024-12-07 10:09:59.138847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.441 [2024-12-07 10:09:59.141523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.441 [2024-12-07 10:09:59.150216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.441 [2024-12-07 10:09:59.150560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.441 [2024-12-07 10:09:59.150576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.441 [2024-12-07 10:09:59.150584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.441 [2024-12-07 10:09:59.150755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.441 [2024-12-07 10:09:59.150927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.441 [2024-12-07 10:09:59.150934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.441 [2024-12-07 10:09:59.150941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.441 [2024-12-07 10:09:59.153727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.700 [2024-12-07 10:09:59.163313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.700 [2024-12-07 10:09:59.163744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.700 [2024-12-07 10:09:59.163785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.700 [2024-12-07 10:09:59.163810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.700 [2024-12-07 10:09:59.164374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.700 [2024-12-07 10:09:59.164547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.700 [2024-12-07 10:09:59.164554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.700 [2024-12-07 10:09:59.164561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.700 [2024-12-07 10:09:59.167360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.700 [2024-12-07 10:09:59.176305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.700 [2024-12-07 10:09:59.176759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.700 [2024-12-07 10:09:59.176775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.701 [2024-12-07 10:09:59.176782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.701 [2024-12-07 10:09:59.176959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.701 [2024-12-07 10:09:59.177131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.701 [2024-12-07 10:09:59.177139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.701 [2024-12-07 10:09:59.177146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.701 [2024-12-07 10:09:59.179888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.701 5671.80 IOPS, 22.16 MiB/s [2024-12-07T09:09:59.427Z] [2024-12-07 10:09:59.189326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.701 [2024-12-07 10:09:59.189760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.701 [2024-12-07 10:09:59.189776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.701 [2024-12-07 10:09:59.189812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.701 [2024-12-07 10:09:59.190408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.701 [2024-12-07 10:09:59.190939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.701 [2024-12-07 10:09:59.190950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.701 [2024-12-07 10:09:59.190958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.701 [2024-12-07 10:09:59.193629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.701 [2024-12-07 10:09:59.202373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.701 [2024-12-07 10:09:59.202832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.701 [2024-12-07 10:09:59.202847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.701 [2024-12-07 10:09:59.202854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.701 [2024-12-07 10:09:59.203042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.701 [2024-12-07 10:09:59.203215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.701 [2024-12-07 10:09:59.203223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.701 [2024-12-07 10:09:59.203229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.701 [2024-12-07 10:09:59.205887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.701 [2024-12-07 10:09:59.215233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.701 [2024-12-07 10:09:59.215673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.701 [2024-12-07 10:09:59.215718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.701 [2024-12-07 10:09:59.215741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.701 [2024-12-07 10:09:59.216311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.701 [2024-12-07 10:09:59.216484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.701 [2024-12-07 10:09:59.216492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.701 [2024-12-07 10:09:59.216498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.701 [2024-12-07 10:09:59.219227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.701 [2024-12-07 10:09:59.228057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.701 [2024-12-07 10:09:59.228424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.701 [2024-12-07 10:09:59.228440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.701 [2024-12-07 10:09:59.228448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.701 [2024-12-07 10:09:59.228619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.701 [2024-12-07 10:09:59.228791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.701 [2024-12-07 10:09:59.228798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.701 [2024-12-07 10:09:59.228805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.701 [2024-12-07 10:09:59.231480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.701 [2024-12-07 10:09:59.241018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.701 [2024-12-07 10:09:59.241443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.701 [2024-12-07 10:09:59.241479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.701 [2024-12-07 10:09:59.241504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.701 [2024-12-07 10:09:59.242096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.701 [2024-12-07 10:09:59.242567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.701 [2024-12-07 10:09:59.242575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.701 [2024-12-07 10:09:59.242582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.701 [2024-12-07 10:09:59.245403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.701 [2024-12-07 10:09:59.254106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.701 [2024-12-07 10:09:59.254466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.701 [2024-12-07 10:09:59.254483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.701 [2024-12-07 10:09:59.254490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.701 [2024-12-07 10:09:59.254662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.701 [2024-12-07 10:09:59.254834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.701 [2024-12-07 10:09:59.254842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.701 [2024-12-07 10:09:59.254852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.701 [2024-12-07 10:09:59.257535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.701 [2024-12-07 10:09:59.267024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.701 [2024-12-07 10:09:59.267480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.701 [2024-12-07 10:09:59.267496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.701 [2024-12-07 10:09:59.267503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.701 [2024-12-07 10:09:59.267674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.701 [2024-12-07 10:09:59.267847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.701 [2024-12-07 10:09:59.267854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.701 [2024-12-07 10:09:59.267861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.701 [2024-12-07 10:09:59.270542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.701 [2024-12-07 10:09:59.279915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.701 [2024-12-07 10:09:59.280351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.701 [2024-12-07 10:09:59.280368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.701 [2024-12-07 10:09:59.280376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.701 [2024-12-07 10:09:59.280546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.701 [2024-12-07 10:09:59.280718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.701 [2024-12-07 10:09:59.280726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.701 [2024-12-07 10:09:59.280733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.701 [2024-12-07 10:09:59.283420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.701 [2024-12-07 10:09:59.292737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.701 [2024-12-07 10:09:59.293212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.701 [2024-12-07 10:09:59.293256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.701 [2024-12-07 10:09:59.293280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.701 [2024-12-07 10:09:59.293710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.701 [2024-12-07 10:09:59.293882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.701 [2024-12-07 10:09:59.293890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.701 [2024-12-07 10:09:59.293896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.701 [2024-12-07 10:09:59.296574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.702 [2024-12-07 10:09:59.305643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.702 [2024-12-07 10:09:59.306100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.702 [2024-12-07 10:09:59.306116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.702 [2024-12-07 10:09:59.306123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.702 [2024-12-07 10:09:59.306285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.702 [2024-12-07 10:09:59.306447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.702 [2024-12-07 10:09:59.306454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.702 [2024-12-07 10:09:59.306460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.702 [2024-12-07 10:09:59.309122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.702 [2024-12-07 10:09:59.318456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.702 [2024-12-07 10:09:59.318899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.702 [2024-12-07 10:09:59.318944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.702 [2024-12-07 10:09:59.318985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.702 [2024-12-07 10:09:59.319564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.702 [2024-12-07 10:09:59.319892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.702 [2024-12-07 10:09:59.319899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.702 [2024-12-07 10:09:59.319906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.702 [2024-12-07 10:09:59.322581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.702 [2024-12-07 10:09:59.331265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.702 [2024-12-07 10:09:59.331696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.702 [2024-12-07 10:09:59.331712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.702 [2024-12-07 10:09:59.331719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.702 [2024-12-07 10:09:59.331891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.702 [2024-12-07 10:09:59.332068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.702 [2024-12-07 10:09:59.332077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.702 [2024-12-07 10:09:59.332083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.702 [2024-12-07 10:09:59.334753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.702 [2024-12-07 10:09:59.344183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.702 [2024-12-07 10:09:59.344655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.702 [2024-12-07 10:09:59.344699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.702 [2024-12-07 10:09:59.344722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.702 [2024-12-07 10:09:59.345323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.702 [2024-12-07 10:09:59.345897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.702 [2024-12-07 10:09:59.345905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.702 [2024-12-07 10:09:59.345912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.702 [2024-12-07 10:09:59.348652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.702 [2024-12-07 10:09:59.357169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.702 [2024-12-07 10:09:59.357619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.702 [2024-12-07 10:09:59.357663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.702 [2024-12-07 10:09:59.357686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.702 [2024-12-07 10:09:59.358129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.702 [2024-12-07 10:09:59.358302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.702 [2024-12-07 10:09:59.358309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.702 [2024-12-07 10:09:59.358316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.702 [2024-12-07 10:09:59.361028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.702 [2024-12-07 10:09:59.370138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.702 [2024-12-07 10:09:59.370560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.702 [2024-12-07 10:09:59.370592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.702 [2024-12-07 10:09:59.370617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.702 [2024-12-07 10:09:59.371206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.702 [2024-12-07 10:09:59.371793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.702 [2024-12-07 10:09:59.371819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.702 [2024-12-07 10:09:59.371839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.702 [2024-12-07 10:09:59.374668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.702 [2024-12-07 10:09:59.383086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.702 [2024-12-07 10:09:59.383464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.702 [2024-12-07 10:09:59.383480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:30.702 [2024-12-07 10:09:59.383487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:30.702 [2024-12-07 10:09:59.383659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:30.702 [2024-12-07 10:09:59.383831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.702 [2024-12-07 10:09:59.383839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.702 [2024-12-07 10:09:59.383849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.702 [2024-12-07 10:09:59.386533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.702 [2024-12-07 10:09:59.396026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.702 [2024-12-07 10:09:59.396373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.702 [2024-12-07 10:09:59.396410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.702 [2024-12-07 10:09:59.396434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.702 [2024-12-07 10:09:59.396984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.702 [2024-12-07 10:09:59.397157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.702 [2024-12-07 10:09:59.397165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.702 [2024-12-07 10:09:59.397172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.702 [2024-12-07 10:09:59.399843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.702 [2024-12-07 10:09:59.408927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.702 [2024-12-07 10:09:59.409368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.702 [2024-12-07 10:09:59.409384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.702 [2024-12-07 10:09:59.409392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.702 [2024-12-07 10:09:59.409563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.702 [2024-12-07 10:09:59.409735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.702 [2024-12-07 10:09:59.409742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.702 [2024-12-07 10:09:59.409749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.702 [2024-12-07 10:09:59.412381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.702 [2024-12-07 10:09:59.422079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.702 [2024-12-07 10:09:59.422525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.702 [2024-12-07 10:09:59.422541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.702 [2024-12-07 10:09:59.422548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.702 [2024-12-07 10:09:59.422725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.702 [2024-12-07 10:09:59.422902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.702 [2024-12-07 10:09:59.422910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.702 [2024-12-07 10:09:59.422916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.962 [2024-12-07 10:09:59.425707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.962 [2024-12-07 10:09:59.435106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.962 [2024-12-07 10:09:59.435541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-12-07 10:09:59.435597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.962 [2024-12-07 10:09:59.435620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.962 [2024-12-07 10:09:59.436049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.962 [2024-12-07 10:09:59.436222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.962 [2024-12-07 10:09:59.436230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.962 [2024-12-07 10:09:59.436236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.962 [2024-12-07 10:09:59.438904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.962 [2024-12-07 10:09:59.448125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.962 [2024-12-07 10:09:59.448553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-12-07 10:09:59.448569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.962 [2024-12-07 10:09:59.448577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.962 [2024-12-07 10:09:59.448748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.962 [2024-12-07 10:09:59.448921] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.962 [2024-12-07 10:09:59.448928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.962 [2024-12-07 10:09:59.448935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.962 [2024-12-07 10:09:59.451662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.962 [2024-12-07 10:09:59.461030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.962 [2024-12-07 10:09:59.461471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-12-07 10:09:59.461515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.962 [2024-12-07 10:09:59.461538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.962 [2024-12-07 10:09:59.462007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.962 [2024-12-07 10:09:59.462181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.962 [2024-12-07 10:09:59.462188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.962 [2024-12-07 10:09:59.462195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.962 [2024-12-07 10:09:59.464960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.962 [2024-12-07 10:09:59.473832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.962 [2024-12-07 10:09:59.474208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-12-07 10:09:59.474225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.962 [2024-12-07 10:09:59.474232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.962 [2024-12-07 10:09:59.474403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.962 [2024-12-07 10:09:59.474578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.962 [2024-12-07 10:09:59.474586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.962 [2024-12-07 10:09:59.474592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.962 [2024-12-07 10:09:59.477399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.962 [2024-12-07 10:09:59.486622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.962 [2024-12-07 10:09:59.487050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-12-07 10:09:59.487094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.962 [2024-12-07 10:09:59.487118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.962 [2024-12-07 10:09:59.487696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.962 [2024-12-07 10:09:59.488218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.962 [2024-12-07 10:09:59.488226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.962 [2024-12-07 10:09:59.488232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.962 [2024-12-07 10:09:59.490976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.962 [2024-12-07 10:09:59.499545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.962 [2024-12-07 10:09:59.499977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-12-07 10:09:59.499994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.962 [2024-12-07 10:09:59.500001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.962 [2024-12-07 10:09:59.500162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.962 [2024-12-07 10:09:59.500325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.962 [2024-12-07 10:09:59.500332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.962 [2024-12-07 10:09:59.500338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.962 [2024-12-07 10:09:59.503030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.962 [2024-12-07 10:09:59.512404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.962 [2024-12-07 10:09:59.512862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-12-07 10:09:59.512906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.962 [2024-12-07 10:09:59.512928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.962 [2024-12-07 10:09:59.513297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.962 [2024-12-07 10:09:59.513470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.962 [2024-12-07 10:09:59.513477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.962 [2024-12-07 10:09:59.513484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.962 [2024-12-07 10:09:59.516160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.962 [2024-12-07 10:09:59.525246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.962 [2024-12-07 10:09:59.525680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-12-07 10:09:59.525696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.962 [2024-12-07 10:09:59.525703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.962 [2024-12-07 10:09:59.525865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.962 [2024-12-07 10:09:59.526052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.962 [2024-12-07 10:09:59.526061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.962 [2024-12-07 10:09:59.526067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.962 [2024-12-07 10:09:59.528738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.962 [2024-12-07 10:09:59.538144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.962 [2024-12-07 10:09:59.538535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-12-07 10:09:59.538550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.962 [2024-12-07 10:09:59.538557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.962 [2024-12-07 10:09:59.538719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.962 [2024-12-07 10:09:59.538881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.962 [2024-12-07 10:09:59.538889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.962 [2024-12-07 10:09:59.538895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.963 [2024-12-07 10:09:59.541589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.963 [2024-12-07 10:09:59.551155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.963 [2024-12-07 10:09:59.551595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-12-07 10:09:59.551612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.963 [2024-12-07 10:09:59.551619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.963 [2024-12-07 10:09:59.551790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.963 [2024-12-07 10:09:59.551990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.963 [2024-12-07 10:09:59.551999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.963 [2024-12-07 10:09:59.552005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.963 [2024-12-07 10:09:59.554676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.963 [2024-12-07 10:09:59.564088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.963 [2024-12-07 10:09:59.564457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-12-07 10:09:59.564472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.963 [2024-12-07 10:09:59.564483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.963 [2024-12-07 10:09:59.564654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.963 [2024-12-07 10:09:59.564826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.963 [2024-12-07 10:09:59.564834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.963 [2024-12-07 10:09:59.564840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.963 [2024-12-07 10:09:59.567517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.963 [2024-12-07 10:09:59.577006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.963 [2024-12-07 10:09:59.577450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-12-07 10:09:59.577494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.963 [2024-12-07 10:09:59.577517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.963 [2024-12-07 10:09:59.578109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.963 [2024-12-07 10:09:59.578508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.963 [2024-12-07 10:09:59.578516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.963 [2024-12-07 10:09:59.578522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.963 [2024-12-07 10:09:59.581197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.963 [2024-12-07 10:09:59.589821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.963 [2024-12-07 10:09:59.590284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-12-07 10:09:59.590329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.963 [2024-12-07 10:09:59.590351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.963 [2024-12-07 10:09:59.590909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.963 [2024-12-07 10:09:59.591169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.963 [2024-12-07 10:09:59.591181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.963 [2024-12-07 10:09:59.591190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.963 [2024-12-07 10:09:59.595246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.963 [2024-12-07 10:09:59.603416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.963 [2024-12-07 10:09:59.603864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-12-07 10:09:59.603881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.963 [2024-12-07 10:09:59.603889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.963 [2024-12-07 10:09:59.604085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.963 [2024-12-07 10:09:59.604270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.963 [2024-12-07 10:09:59.604281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.963 [2024-12-07 10:09:59.604288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.963 [2024-12-07 10:09:59.607042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.963 [2024-12-07 10:09:59.616254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.963 [2024-12-07 10:09:59.616669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-12-07 10:09:59.616714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.963 [2024-12-07 10:09:59.616737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.963 [2024-12-07 10:09:59.617276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.963 [2024-12-07 10:09:59.617449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.963 [2024-12-07 10:09:59.617457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.963 [2024-12-07 10:09:59.617463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.963 [2024-12-07 10:09:59.620198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.963 [2024-12-07 10:09:59.629248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.963 [2024-12-07 10:09:59.629685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-12-07 10:09:59.629702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.963 [2024-12-07 10:09:59.629711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.963 [2024-12-07 10:09:59.629895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.963 [2024-12-07 10:09:59.630072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.963 [2024-12-07 10:09:59.630082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.963 [2024-12-07 10:09:59.630089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.963 [2024-12-07 10:09:59.632871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.963 [2024-12-07 10:09:59.642197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.963 [2024-12-07 10:09:59.642642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-12-07 10:09:59.642672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.963 [2024-12-07 10:09:59.642695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.963 [2024-12-07 10:09:59.643290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.963 [2024-12-07 10:09:59.643536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.963 [2024-12-07 10:09:59.643543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.963 [2024-12-07 10:09:59.643550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.963 [2024-12-07 10:09:59.646311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.963 [2024-12-07 10:09:59.655107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.963 [2024-12-07 10:09:59.655599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-12-07 10:09:59.655643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.963 [2024-12-07 10:09:59.655666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.963 [2024-12-07 10:09:59.656158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.963 [2024-12-07 10:09:59.656331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.963 [2024-12-07 10:09:59.656339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.963 [2024-12-07 10:09:59.656346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.963 [2024-12-07 10:09:59.659081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.963 [2024-12-07 10:09:59.668042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.963 [2024-12-07 10:09:59.668472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-12-07 10:09:59.668516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.963 [2024-12-07 10:09:59.668539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.963 [2024-12-07 10:09:59.669043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.963 [2024-12-07 10:09:59.669216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.963 [2024-12-07 10:09:59.669224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.963 [2024-12-07 10:09:59.669230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:30.964 [2024-12-07 10:09:59.671974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:30.964 [2024-12-07 10:09:59.680953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:30.964 [2024-12-07 10:09:59.681422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-12-07 10:09:59.681465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:30.964 [2024-12-07 10:09:59.681488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:30.964 [2024-12-07 10:09:59.682082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:30.964 [2024-12-07 10:09:59.682422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:30.964 [2024-12-07 10:09:59.682433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:30.964 [2024-12-07 10:09:59.682442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.223 [2024-12-07 10:09:59.686499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.223 [2024-12-07 10:09:59.694423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.223 [2024-12-07 10:09:59.694893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.223 [2024-12-07 10:09:59.694937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:31.223 [2024-12-07 10:09:59.694975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:31.223 [2024-12-07 10:09:59.695562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:31.223 [2024-12-07 10:09:59.696100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.223 [2024-12-07 10:09:59.696109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.223 [2024-12-07 10:09:59.696115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.223 [2024-12-07 10:09:59.698824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.223 [2024-12-07 10:09:59.707252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.223 [2024-12-07 10:09:59.707678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.223 [2024-12-07 10:09:59.707721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:31.223 [2024-12-07 10:09:59.707743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:31.223 [2024-12-07 10:09:59.708337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:31.223 [2024-12-07 10:09:59.708800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.223 [2024-12-07 10:09:59.708808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.223 [2024-12-07 10:09:59.708814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.223 [2024-12-07 10:09:59.711440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.223 [2024-12-07 10:09:59.720100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.223 [2024-12-07 10:09:59.720545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.223 [2024-12-07 10:09:59.720561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:31.223 [2024-12-07 10:09:59.720568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:31.223 [2024-12-07 10:09:59.720739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:31.223 [2024-12-07 10:09:59.720912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.223 [2024-12-07 10:09:59.720919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.223 [2024-12-07 10:09:59.720926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.223 [2024-12-07 10:09:59.723641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.223 [2024-12-07 10:09:59.733025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.223 [2024-12-07 10:09:59.733462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.223 [2024-12-07 10:09:59.733478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:31.223 [2024-12-07 10:09:59.733486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:31.223 [2024-12-07 10:09:59.733657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:31.223 [2024-12-07 10:09:59.733829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.223 [2024-12-07 10:09:59.733837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.223 [2024-12-07 10:09:59.733847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.223 [2024-12-07 10:09:59.736524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.223 [2024-12-07 10:09:59.745859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.223 [2024-12-07 10:09:59.746324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.223 [2024-12-07 10:09:59.746367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420
00:35:31.223 [2024-12-07 10:09:59.746390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set
00:35:31.223 [2024-12-07 10:09:59.746877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor
00:35:31.223 [2024-12-07 10:09:59.747059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.223 [2024-12-07 10:09:59.747068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.223 [2024-12-07 10:09:59.747074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.223 [2024-12-07 10:09:59.749743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.223 [2024-12-07 10:09:59.758657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.223 [2024-12-07 10:09:59.759042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.223 [2024-12-07 10:09:59.759059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.223 [2024-12-07 10:09:59.759067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.223 [2024-12-07 10:09:59.759248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.223 [2024-12-07 10:09:59.759410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.223 [2024-12-07 10:09:59.759417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.223 [2024-12-07 10:09:59.759423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.223 [2024-12-07 10:09:59.762113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.223 [2024-12-07 10:09:59.771592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.223 [2024-12-07 10:09:59.771929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.223 [2024-12-07 10:09:59.771945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.223 [2024-12-07 10:09:59.771958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.223 [2024-12-07 10:09:59.772129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.224 [2024-12-07 10:09:59.772302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.224 [2024-12-07 10:09:59.772309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.224 [2024-12-07 10:09:59.772316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.224 [2024-12-07 10:09:59.774991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.224 [2024-12-07 10:09:59.784467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.224 [2024-12-07 10:09:59.784891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.224 [2024-12-07 10:09:59.784907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.224 [2024-12-07 10:09:59.784940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.224 [2024-12-07 10:09:59.785537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.224 [2024-12-07 10:09:59.786098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.224 [2024-12-07 10:09:59.786106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.224 [2024-12-07 10:09:59.786112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.224 [2024-12-07 10:09:59.788879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.224 [2024-12-07 10:09:59.797381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.224 [2024-12-07 10:09:59.797746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.224 [2024-12-07 10:09:59.797762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.224 [2024-12-07 10:09:59.797770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.224 [2024-12-07 10:09:59.797941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.224 [2024-12-07 10:09:59.798118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.224 [2024-12-07 10:09:59.798126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.224 [2024-12-07 10:09:59.798133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.224 [2024-12-07 10:09:59.800853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1465676 Killed "${NVMF_APP[@]}" "$@" 00:35:31.224 10:09:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:31.224 10:09:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:31.224 10:09:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:31.224 10:09:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:31.224 10:09:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.224 [2024-12-07 10:09:59.810499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.224 [2024-12-07 10:09:59.810921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.224 [2024-12-07 10:09:59.810936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.224 [2024-12-07 10:09:59.810944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.224 [2024-12-07 10:09:59.811126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.224 [2024-12-07 10:09:59.811304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.224 [2024-12-07 10:09:59.811312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.224 [2024-12-07 10:09:59.811320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:31.224 10:09:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=1466877 00:35:31.224 10:09:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:31.224 10:09:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 1466877 00:35:31.224 10:09:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1466877 ']' 00:35:31.224 10:09:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:31.224 10:09:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:31.224 10:09:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:31.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:31.224 [2024-12-07 10:09:59.814153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.224 10:09:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:31.224 10:09:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.224 [2024-12-07 10:09:59.823685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.224 [2024-12-07 10:09:59.824062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.224 [2024-12-07 10:09:59.824079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.224 [2024-12-07 10:09:59.824087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.224 [2024-12-07 10:09:59.824264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.224 [2024-12-07 10:09:59.824442] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.224 [2024-12-07 10:09:59.824449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.224 [2024-12-07 10:09:59.824456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.224 [2024-12-07 10:09:59.827297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.224 [2024-12-07 10:09:59.836834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.224 [2024-12-07 10:09:59.837284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.224 [2024-12-07 10:09:59.837301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.224 [2024-12-07 10:09:59.837308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.224 [2024-12-07 10:09:59.837485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.224 [2024-12-07 10:09:59.837662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.224 [2024-12-07 10:09:59.837671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.224 [2024-12-07 10:09:59.837677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.224 [2024-12-07 10:09:59.840510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.224 [2024-12-07 10:09:59.849854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.224 [2024-12-07 10:09:59.850243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.224 [2024-12-07 10:09:59.850260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.224 [2024-12-07 10:09:59.850267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.224 [2024-12-07 10:09:59.850447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.224 [2024-12-07 10:09:59.850625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.224 [2024-12-07 10:09:59.850632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.224 [2024-12-07 10:09:59.850639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.224 [2024-12-07 10:09:59.853510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:31.224 [2024-12-07 10:09:59.858986] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:35:31.224 [2024-12-07 10:09:59.859025] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:31.224 [2024-12-07 10:09:59.862909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.224 [2024-12-07 10:09:59.863224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.224 [2024-12-07 10:09:59.863241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.224 [2024-12-07 10:09:59.863249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.224 [2024-12-07 10:09:59.863425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.224 [2024-12-07 10:09:59.863602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.224 [2024-12-07 10:09:59.863610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.224 [2024-12-07 10:09:59.863618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.224 [2024-12-07 10:09:59.866450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.224 [2024-12-07 10:09:59.876137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.224 [2024-12-07 10:09:59.876448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.224 [2024-12-07 10:09:59.876465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.224 [2024-12-07 10:09:59.876473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.224 [2024-12-07 10:09:59.876650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.224 [2024-12-07 10:09:59.876829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.224 [2024-12-07 10:09:59.876837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.225 [2024-12-07 10:09:59.876844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.225 [2024-12-07 10:09:59.879717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.225 [2024-12-07 10:09:59.889191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.225 [2024-12-07 10:09:59.889615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.225 [2024-12-07 10:09:59.889632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.225 [2024-12-07 10:09:59.889640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.225 [2024-12-07 10:09:59.889817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.225 [2024-12-07 10:09:59.890003] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.225 [2024-12-07 10:09:59.890012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.225 [2024-12-07 10:09:59.890019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.225 [2024-12-07 10:09:59.892848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.225 [2024-12-07 10:09:59.902384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.225 [2024-12-07 10:09:59.902711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.225 [2024-12-07 10:09:59.902728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.225 [2024-12-07 10:09:59.902735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.225 [2024-12-07 10:09:59.902912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.225 [2024-12-07 10:09:59.903096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.225 [2024-12-07 10:09:59.903104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.225 [2024-12-07 10:09:59.903111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.225 [2024-12-07 10:09:59.905945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.225 [2024-12-07 10:09:59.915540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.225 [2024-12-07 10:09:59.915961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.225 [2024-12-07 10:09:59.915978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.225 [2024-12-07 10:09:59.915985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.225 [2024-12-07 10:09:59.916162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.225 [2024-12-07 10:09:59.916339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.225 [2024-12-07 10:09:59.916347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.225 [2024-12-07 10:09:59.916354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.225 [2024-12-07 10:09:59.919222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.225 [2024-12-07 10:09:59.919499] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:31.225 [2024-12-07 10:09:59.928545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.225 [2024-12-07 10:09:59.928964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.225 [2024-12-07 10:09:59.928985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.225 [2024-12-07 10:09:59.928995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.225 [2024-12-07 10:09:59.929172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.225 [2024-12-07 10:09:59.929351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.225 [2024-12-07 10:09:59.929359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.225 [2024-12-07 10:09:59.929371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.225 [2024-12-07 10:09:59.932189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.225 [2024-12-07 10:09:59.941620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.225 [2024-12-07 10:09:59.942021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.225 [2024-12-07 10:09:59.942040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.225 [2024-12-07 10:09:59.942050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.225 [2024-12-07 10:09:59.942227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.225 [2024-12-07 10:09:59.942407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.225 [2024-12-07 10:09:59.942415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.225 [2024-12-07 10:09:59.942423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.225 [2024-12-07 10:09:59.945256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.484 [2024-12-07 10:09:59.954786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.484 [2024-12-07 10:09:59.955167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.484 [2024-12-07 10:09:59.955186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.484 [2024-12-07 10:09:59.955194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.484 [2024-12-07 10:09:59.955372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.484 [2024-12-07 10:09:59.955550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.484 [2024-12-07 10:09:59.955558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.484 [2024-12-07 10:09:59.955566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.484 [2024-12-07 10:09:59.958367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:31.484 [2024-12-07 10:09:59.961466] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:31.484 [2024-12-07 10:09:59.961492] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:31.484 [2024-12-07 10:09:59.961499] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:31.484 [2024-12-07 10:09:59.961505] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:35:31.484 [2024-12-07 10:09:59.961510] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:31.484 [2024-12-07 10:09:59.961564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:31.484 [2024-12-07 10:09:59.961782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:35:31.484 [2024-12-07 10:09:59.961783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:31.484 [2024-12-07 10:09:59.967938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.484 [2024-12-07 10:09:59.968337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.484 [2024-12-07 10:09:59.968357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.484 [2024-12-07 10:09:59.968366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.484 [2024-12-07 10:09:59.968550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.484 [2024-12-07 10:09:59.968730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.484 [2024-12-07 10:09:59.968738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.484 [2024-12-07 10:09:59.968746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.485 [2024-12-07 10:09:59.971580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.485 [2024-12-07 10:09:59.981138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.485 [2024-12-07 10:09:59.981520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.485 [2024-12-07 10:09:59.981540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.485 [2024-12-07 10:09:59.981551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.485 [2024-12-07 10:09:59.981730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.485 [2024-12-07 10:09:59.981909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.485 [2024-12-07 10:09:59.981919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.485 [2024-12-07 10:09:59.981927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.485 [2024-12-07 10:09:59.984766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.485 [2024-12-07 10:09:59.994302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.485 [2024-12-07 10:09:59.994722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.485 [2024-12-07 10:09:59.994742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.485 [2024-12-07 10:09:59.994752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.485 [2024-12-07 10:09:59.994930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.485 [2024-12-07 10:09:59.995116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.485 [2024-12-07 10:09:59.995126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.485 [2024-12-07 10:09:59.995133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.485 [2024-12-07 10:09:59.997966] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.485 [2024-12-07 10:10:00.008394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.485 [2024-12-07 10:10:00.008758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.485 [2024-12-07 10:10:00.008781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.485 [2024-12-07 10:10:00.008793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.485 [2024-12-07 10:10:00.009010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.485 [2024-12-07 10:10:00.009189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.485 [2024-12-07 10:10:00.009197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.485 [2024-12-07 10:10:00.009210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.485 [2024-12-07 10:10:00.012045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.485 [2024-12-07 10:10:00.021585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.485 [2024-12-07 10:10:00.021966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.485 [2024-12-07 10:10:00.021987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.485 [2024-12-07 10:10:00.021997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.485 [2024-12-07 10:10:00.022177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.485 [2024-12-07 10:10:00.022356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.485 [2024-12-07 10:10:00.022366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.485 [2024-12-07 10:10:00.022374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.485 [2024-12-07 10:10:00.025208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.485 [2024-12-07 10:10:00.034893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.485 [2024-12-07 10:10:00.035231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.485 [2024-12-07 10:10:00.035252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.485 [2024-12-07 10:10:00.035262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.485 [2024-12-07 10:10:00.035441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.485 [2024-12-07 10:10:00.035620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.485 [2024-12-07 10:10:00.035629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.485 [2024-12-07 10:10:00.035637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.485 [2024-12-07 10:10:00.038472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.485 [2024-12-07 10:10:00.048017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.485 [2024-12-07 10:10:00.048384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.485 [2024-12-07 10:10:00.048402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.485 [2024-12-07 10:10:00.048411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.485 [2024-12-07 10:10:00.048589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.485 [2024-12-07 10:10:00.048768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.485 [2024-12-07 10:10:00.048776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.485 [2024-12-07 10:10:00.048783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.485 [2024-12-07 10:10:00.051618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.485 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:31.485 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:31.485 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:31.485 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:31.485 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.485 [2024-12-07 10:10:00.061153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.485 [2024-12-07 10:10:00.061521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.485 [2024-12-07 10:10:00.061539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.485 [2024-12-07 10:10:00.061547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.485 [2024-12-07 10:10:00.061724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.485 [2024-12-07 10:10:00.061904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.485 [2024-12-07 10:10:00.061912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.485 [2024-12-07 10:10:00.061919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.485 [2024-12-07 10:10:00.064752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.485 [2024-12-07 10:10:00.074475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.485 [2024-12-07 10:10:00.075014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.485 [2024-12-07 10:10:00.075038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.485 [2024-12-07 10:10:00.075051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.485 [2024-12-07 10:10:00.075451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.485 [2024-12-07 10:10:00.075724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.485 [2024-12-07 10:10:00.075739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.485 [2024-12-07 10:10:00.075749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.485 [2024-12-07 10:10:00.079085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.485 [2024-12-07 10:10:00.087620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.485 [2024-12-07 10:10:00.087943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.485 [2024-12-07 10:10:00.087969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.485 [2024-12-07 10:10:00.087977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.485 [2024-12-07 10:10:00.088156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.485 [2024-12-07 10:10:00.088336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.485 [2024-12-07 10:10:00.088345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.485 [2024-12-07 10:10:00.088352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.485 [2024-12-07 10:10:00.091185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.485 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:31.485 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:31.485 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.485 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.485 [2024-12-07 10:10:00.097551] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:31.485 [2024-12-07 10:10:00.100721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.486 [2024-12-07 10:10:00.101013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.486 [2024-12-07 10:10:00.101030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.486 [2024-12-07 10:10:00.101038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.486 [2024-12-07 10:10:00.101216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.486 [2024-12-07 10:10:00.101394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.486 [2024-12-07 10:10:00.101403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.486 [2024-12-07 10:10:00.101410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.486 [2024-12-07 10:10:00.104244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.486 [2024-12-07 10:10:00.113771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.486 [2024-12-07 10:10:00.114083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.486 [2024-12-07 10:10:00.114100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.486 [2024-12-07 10:10:00.114108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.486 [2024-12-07 10:10:00.114286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.486 [2024-12-07 10:10:00.114464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.486 [2024-12-07 10:10:00.114472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.486 [2024-12-07 10:10:00.114478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.486 [2024-12-07 10:10:00.117314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.486 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.486 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:31.486 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.486 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.486 [2024-12-07 10:10:00.126845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.486 [2024-12-07 10:10:00.127257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.486 [2024-12-07 10:10:00.127275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.486 [2024-12-07 10:10:00.127284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.486 [2024-12-07 10:10:00.127462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.486 [2024-12-07 10:10:00.127639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.486 [2024-12-07 10:10:00.127648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.486 [2024-12-07 10:10:00.127659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.486 [2024-12-07 10:10:00.130493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.486 Malloc0 00:35:31.486 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.486 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:31.486 [2024-12-07 10:10:00.140032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.486 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.486 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.486 [2024-12-07 10:10:00.140409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.486 [2024-12-07 10:10:00.140428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.486 [2024-12-07 10:10:00.140435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.486 [2024-12-07 10:10:00.140612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.486 [2024-12-07 10:10:00.140793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.486 [2024-12-07 10:10:00.140802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.486 [2024-12-07 10:10:00.140808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.486 [2024-12-07 10:10:00.143640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.486 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.486 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:31.486 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.486 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.486 [2024-12-07 10:10:00.153184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.486 [2024-12-07 10:10:00.153651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.486 [2024-12-07 10:10:00.153669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397780 with addr=10.0.0.2, port=4420 00:35:31.486 [2024-12-07 10:10:00.153677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397780 is same with the state(6) to be set 00:35:31.486 [2024-12-07 10:10:00.153854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397780 (9): Bad file descriptor 00:35:31.486 [2024-12-07 10:10:00.154038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.486 [2024-12-07 10:10:00.154048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.486 [2024-12-07 10:10:00.154055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.486 [2024-12-07 10:10:00.156881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.486 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.486 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:31.486 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.486 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.486 [2024-12-07 10:10:00.162536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:31.486 [2024-12-07 10:10:00.166249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.486 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.486 10:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1465937 00:35:31.745 4726.50 IOPS, 18.46 MiB/s [2024-12-07T09:10:00.471Z] [2024-12-07 10:10:00.315913] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:35:33.615 5382.29 IOPS, 21.02 MiB/s [2024-12-07T09:10:03.278Z] 6032.25 IOPS, 23.56 MiB/s [2024-12-07T09:10:04.211Z] 6561.44 IOPS, 25.63 MiB/s [2024-12-07T09:10:05.586Z] 6971.60 IOPS, 27.23 MiB/s [2024-12-07T09:10:06.517Z] 7321.91 IOPS, 28.60 MiB/s [2024-12-07T09:10:07.451Z] 7604.42 IOPS, 29.70 MiB/s [2024-12-07T09:10:08.391Z] 7841.54 IOPS, 30.63 MiB/s [2024-12-07T09:10:09.326Z] 8047.71 IOPS, 31.44 MiB/s [2024-12-07T09:10:09.326Z] 8233.40 IOPS, 32.16 MiB/s 00:35:40.600 Latency(us) 00:35:40.600 [2024-12-07T09:10:09.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:40.600 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:40.600 Verification LBA range: start 0x0 length 0x4000 00:35:40.600 Nvme1n1 : 15.05 8206.62 32.06 10973.11 0.00 6635.66 447.00 43994.60 00:35:40.600 [2024-12-07T09:10:09.326Z] =================================================================================================================== 00:35:40.600 [2024-12-07T09:10:09.326Z] Total : 8206.62 32.06 10973.11 0.00 6635.66 447.00 43994.60 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:40.859 rmmod nvme_tcp 00:35:40.859 rmmod nvme_fabrics 00:35:40.859 rmmod nvme_keyring 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@513 -- # '[' -n 1466877 ']' 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # killprocess 1466877 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1466877 ']' 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1466877 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1466877 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1466877' killing process with pid 1466877 
10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1466877 00:35:40.859 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1466877 00:35:41.118 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:41.118 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:41.118 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:41.118 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:41.118 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:41.118 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-save 00:35:41.118 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-restore 00:35:41.118 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:41.118 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:41.118 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:41.118 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:41.118 10:10:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.754 10:10:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:43.754 00:35:43.754 real 0m25.691s 00:35:43.754 user 1m0.943s 00:35:43.754 sys 0m6.407s 00:35:43.754 10:10:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:43.754 10:10:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.754 ************************************ 00:35:43.754 END TEST nvmf_bdevperf 00:35:43.754 
************************************ 00:35:43.754 10:10:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:43.754 10:10:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:43.754 10:10:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:43.754 10:10:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.754 ************************************ 00:35:43.754 START TEST nvmf_target_disconnect 00:35:43.754 ************************************ 00:35:43.754 10:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:43.754 * Looking for test storage... 00:35:43.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:43.754 10:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:43.754 10:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:35:43.754 10:10:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:43.754 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:43.754 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:43.754 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:43.754 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:43.754 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:43.754 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:35:43.754 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:43.754 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:43.754 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:43.754 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:43.754 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:43.754 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:43.754 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:43.754 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:43.754 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:43.754 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:43.754 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:43.754 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:43.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.755 --rc genhtml_branch_coverage=1 00:35:43.755 --rc genhtml_function_coverage=1 00:35:43.755 --rc genhtml_legend=1 00:35:43.755 --rc geninfo_all_blocks=1 00:35:43.755 --rc geninfo_unexecuted_blocks=1 
00:35:43.755 00:35:43.755 ' 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:43.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.755 --rc genhtml_branch_coverage=1 00:35:43.755 --rc genhtml_function_coverage=1 00:35:43.755 --rc genhtml_legend=1 00:35:43.755 --rc geninfo_all_blocks=1 00:35:43.755 --rc geninfo_unexecuted_blocks=1 00:35:43.755 00:35:43.755 ' 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:43.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.755 --rc genhtml_branch_coverage=1 00:35:43.755 --rc genhtml_function_coverage=1 00:35:43.755 --rc genhtml_legend=1 00:35:43.755 --rc geninfo_all_blocks=1 00:35:43.755 --rc geninfo_unexecuted_blocks=1 00:35:43.755 00:35:43.755 ' 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:43.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.755 --rc genhtml_branch_coverage=1 00:35:43.755 --rc genhtml_function_coverage=1 00:35:43.755 --rc genhtml_legend=1 00:35:43.755 --rc geninfo_all_blocks=1 00:35:43.755 --rc geninfo_unexecuted_blocks=1 00:35:43.755 00:35:43.755 ' 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:43.755 10:10:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:43.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:43.755 10:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:49.060 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:49.060 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:35:49.060 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:49.060 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:49.060 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:49.060 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:49.060 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:49.060 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:35:49.060 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:49.060 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:35:49.060 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:35:49.060 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:35:49.060 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:35:49.060 
10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:35:49.060 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:35:49.060 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:49.060 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:49.060 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:49.060 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ e810 == 
mlx5 ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:49.061 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:49.061 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:49.061 
10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:49.061 Found net devices under 0000:86:00.0: cvl_0_0 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:49.061 10:10:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:49.061 Found net devices under 0000:86:00.1: cvl_0_1 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:49.061 
10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:49.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:49.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:35:49.061 00:35:49.061 --- 10.0.0.2 ping statistics --- 00:35:49.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:49.061 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:49.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:49.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:35:49.061 00:35:49.061 --- 10.0.0.1 ping statistics --- 00:35:49.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:49.061 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # return 0 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:49.061 10:10:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:49.061 ************************************ 00:35:49.061 START TEST nvmf_target_disconnect_tc1 00:35:49.061 ************************************ 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:49.061 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:35:49.062 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:49.062 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 
00:35:49.062 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:49.062 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:49.062 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:49.062 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:49.062 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:49.062 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:49.062 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:49.062 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:49.062 [2024-12-07 10:10:17.744964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.062 [2024-12-07 10:10:17.745012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x828f30 with addr=10.0.0.2, port=4420 00:35:49.062 [2024-12-07 10:10:17.745037] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed 
to create admin qpair 00:35:49.062 [2024-12-07 10:10:17.745050] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:49.062 [2024-12-07 10:10:17.745057] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:49.062 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:49.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:49.062 Initializing NVMe Controllers 00:35:49.062 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:35:49.062 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:49.062 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:49.062 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:49.062 00:35:49.062 real 0m0.096s 00:35:49.062 user 0m0.038s 00:35:49.062 sys 0m0.057s 00:35:49.062 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:49.062 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:49.062 ************************************ 00:35:49.062 END TEST nvmf_target_disconnect_tc1 00:35:49.062 ************************************ 00:35:49.320 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:49.320 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:49.320 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:49.320 10:10:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:49.320 ************************************ 00:35:49.320 START TEST nvmf_target_disconnect_tc2 00:35:49.320 ************************************ 00:35:49.320 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:35:49.320 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:49.320 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:49.320 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:49.320 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:49.320 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:49.320 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=1472032 00:35:49.320 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 1472032 00:35:49.320 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:49.320 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1472032 ']' 00:35:49.321 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:49.321 10:10:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:49.321 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:49.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:49.321 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:49.321 10:10:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:49.321 [2024-12-07 10:10:17.881492] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:35:49.321 [2024-12-07 10:10:17.881533] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:49.321 [2024-12-07 10:10:17.953656] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:49.321 [2024-12-07 10:10:17.993728] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:49.321 [2024-12-07 10:10:17.993770] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:49.321 [2024-12-07 10:10:17.993778] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:49.321 [2024-12-07 10:10:17.993784] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:49.321 [2024-12-07 10:10:17.993790] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:49.321 [2024-12-07 10:10:17.993907] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:35:49.321 [2024-12-07 10:10:17.994032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:35:49.321 [2024-12-07 10:10:17.994119] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:35:49.321 [2024-12-07 10:10:17.994120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:49.579 Malloc0 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.579 10:10:18 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:49.579 [2024-12-07 10:10:18.164802] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.579 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:49.580 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.580 10:10:18 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:49.580 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.580 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:49.580 [2024-12-07 10:10:18.193072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:49.580 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.580 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:49.580 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.580 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:49.580 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.580 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1472060 00:35:49.580 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:49.580 10:10:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:51.527 10:10:20 
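For reference, the target-side setup that the xtrace lines above record corresponds to roughly this JSON-RPC command sequence. This is a sketch reconstructed from the `rpc_cmd` calls in the trace; the `rpc.py` invocation style is an assumption (the harness wraps it in `rpc_cmd`), so treat it as a configuration fragment, not a verbatim replay:

```shell
# Reconstruction of the traced target setup (rpc.py path assumed).
rpc.py bdev_malloc_create 64 512 -b Malloc0                                         # 64 MiB malloc bdev, 512 B blocks
rpc.py nvmf_create_transport -t tcp -o                                              # TCP transport init
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # subsystem, allow any host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                     # attach namespace
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420             # discovery listener
```

After this setup the test launches the `reconnect` example against the listener and then kills the target (`kill -9`), which produces the I/O failures that follow.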
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1472032 00:35:51.527 10:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:51.527 Read completed with error (sct=0, sc=8) 00:35:51.527 starting I/O failed 00:35:51.527 Read completed with error (sct=0, sc=8) 00:35:51.527 starting I/O failed 00:35:51.527 Read completed with error (sct=0, sc=8) 00:35:51.527 starting I/O failed 00:35:51.527 Read completed with error (sct=0, sc=8) 00:35:51.527 starting I/O failed 00:35:51.527 Read completed with error (sct=0, sc=8) 00:35:51.527 starting I/O failed 00:35:51.527 Write completed with error (sct=0, sc=8) 00:35:51.527 starting I/O failed 00:35:51.527 Write completed with error (sct=0, sc=8) 00:35:51.527 starting I/O failed 00:35:51.527 Write completed with error (sct=0, sc=8) 00:35:51.527 starting I/O failed 00:35:51.527 Write completed with error (sct=0, sc=8) 00:35:51.527 starting I/O failed 00:35:51.527 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 
00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 [2024-12-07 10:10:20.228414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 
Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 
[2024-12-07 10:10:20.228618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Write completed with error (sct=0, sc=8) 00:35:51.528 starting I/O failed 00:35:51.528 Read completed with 
error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Write completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Write completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Write completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Write completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Write completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Write completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 [2024-12-07 10:10:20.228810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, 
sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Write completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Write completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Write completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Write completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Write completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Write completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 Read completed with error (sct=0, sc=8) 00:35:51.529 starting I/O failed 00:35:51.529 [2024-12-07 10:10:20.229013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or 
address) on qpair id 1 00:35:51.529 [2024-12-07 10:10:20.229241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.529 [2024-12-07 10:10:20.229271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.529 qpair failed and we were unable to recover it. 00:35:51.529 [2024-12-07 10:10:20.229478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.529 [2024-12-07 10:10:20.229491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.529 qpair failed and we were unable to recover it. 00:35:51.529 [2024-12-07 10:10:20.229588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.529 [2024-12-07 10:10:20.229599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.529 qpair failed and we were unable to recover it. 00:35:51.529 [2024-12-07 10:10:20.229702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.529 [2024-12-07 10:10:20.229713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.529 qpair failed and we were unable to recover it. 00:35:51.529 [2024-12-07 10:10:20.229888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.529 [2024-12-07 10:10:20.229899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.529 qpair failed and we were unable to recover it. 
00:35:51.529 [2024-12-07 10:10:20.230066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.529 [2024-12-07 10:10:20.230079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.529 qpair failed and we were unable to recover it. 00:35:51.529 [2024-12-07 10:10:20.230231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.529 [2024-12-07 10:10:20.230243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.529 qpair failed and we were unable to recover it. 00:35:51.529 [2024-12-07 10:10:20.230364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.529 [2024-12-07 10:10:20.230397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.529 qpair failed and we were unable to recover it. 00:35:51.529 [2024-12-07 10:10:20.230754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.529 [2024-12-07 10:10:20.230786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.529 qpair failed and we were unable to recover it. 00:35:51.529 [2024-12-07 10:10:20.231040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.529 [2024-12-07 10:10:20.231052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.529 qpair failed and we were unable to recover it. 
00:35:51.529 [2024-12-07 10:10:20.231226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.529 [2024-12-07 10:10:20.231239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.529 qpair failed and we were unable to recover it. 00:35:51.529 [2024-12-07 10:10:20.231369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.529 [2024-12-07 10:10:20.231401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.529 qpair failed and we were unable to recover it. 00:35:51.529 [2024-12-07 10:10:20.231698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.231732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.231878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.231911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.232123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.232156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 
00:35:51.530 [2024-12-07 10:10:20.232358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.232392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.232654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.232688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.232901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.232933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.233084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.233117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.233374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.233406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 
00:35:51.530 [2024-12-07 10:10:20.233552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.233585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.233866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.233899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.234144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.234177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.234445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.234505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.234663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.234698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 
00:35:51.530 [2024-12-07 10:10:20.234989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.235024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.235235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.235267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.235551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.235563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.235746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.235758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.235875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.235908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 
00:35:51.530 [2024-12-07 10:10:20.236110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.236144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.236339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.236371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.236526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.236558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.236695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.236727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.236974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.237009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 
00:35:51.530 [2024-12-07 10:10:20.237157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.237188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.237340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.237382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.237596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.237628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.237752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.237763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 00:35:51.530 [2024-12-07 10:10:20.237982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.530 [2024-12-07 10:10:20.238016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.530 qpair failed and we were unable to recover it. 
00:35:51.530 [2024-12-07 10:10:20.238154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.238187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.238448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.238480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.238793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.238825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.239066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.239079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.239204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.239215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 
00:35:51.531 [2024-12-07 10:10:20.239431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.239463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.239670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.239703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.239978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.240011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.240210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.240243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.240397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.240429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 
00:35:51.531 [2024-12-07 10:10:20.240649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.240682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.240953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.240966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.241118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.241131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.241985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.242021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.242188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.242219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 
00:35:51.531 [2024-12-07 10:10:20.242409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.242440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.242809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.242841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.243117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.243151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.243342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.243375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.243598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.243630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 
00:35:51.531 [2024-12-07 10:10:20.243824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.243836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.244025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.244058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.244278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.244310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.244597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.244689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.244976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.245015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 
00:35:51.531 [2024-12-07 10:10:20.245283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.245299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.245413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.245429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.245544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.531 [2024-12-07 10:10:20.245560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.531 qpair failed and we were unable to recover it. 00:35:51.531 [2024-12-07 10:10:20.245751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.532 [2024-12-07 10:10:20.245767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.532 qpair failed and we were unable to recover it. 00:35:51.532 [2024-12-07 10:10:20.245970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.532 [2024-12-07 10:10:20.245986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.532 qpair failed and we were unable to recover it. 
00:35:51.532 [2024-12-07 10:10:20.246121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.532 [2024-12-07 10:10:20.246138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.532 qpair failed and we were unable to recover it. 00:35:51.532 [2024-12-07 10:10:20.246381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.532 [2024-12-07 10:10:20.246398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.532 qpair failed and we were unable to recover it. 00:35:51.532 [2024-12-07 10:10:20.246560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.532 [2024-12-07 10:10:20.246576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.532 qpair failed and we were unable to recover it. 00:35:51.532 [2024-12-07 10:10:20.246816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.532 [2024-12-07 10:10:20.246848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.532 qpair failed and we were unable to recover it. 00:35:51.532 [2024-12-07 10:10:20.247092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.532 [2024-12-07 10:10:20.247126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.532 qpair failed and we were unable to recover it. 
00:35:51.532 [2024-12-07 10:10:20.247272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.532 [2024-12-07 10:10:20.247305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.532 qpair failed and we were unable to recover it. 00:35:51.532 [2024-12-07 10:10:20.247452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.532 [2024-12-07 10:10:20.247494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.532 qpair failed and we were unable to recover it. 00:35:51.532 [2024-12-07 10:10:20.247751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.532 [2024-12-07 10:10:20.247785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.532 qpair failed and we were unable to recover it. 00:35:51.532 [2024-12-07 10:10:20.248076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.532 [2024-12-07 10:10:20.248093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.532 qpair failed and we were unable to recover it. 00:35:51.532 [2024-12-07 10:10:20.248212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.532 [2024-12-07 10:10:20.248228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.532 qpair failed and we were unable to recover it. 
00:35:51.532 [2024-12-07 10:10:20.248418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.532 [2024-12-07 10:10:20.248433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.532 qpair failed and we were unable to recover it. 00:35:51.532 [2024-12-07 10:10:20.248571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.532 [2024-12-07 10:10:20.248586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.532 qpair failed and we were unable to recover it. 00:35:51.532 [2024-12-07 10:10:20.248835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.532 [2024-12-07 10:10:20.248867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.532 qpair failed and we were unable to recover it. 00:35:51.532 [2024-12-07 10:10:20.249088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.532 [2024-12-07 10:10:20.249122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.532 qpair failed and we were unable to recover it. 00:35:51.532 [2024-12-07 10:10:20.249274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.532 [2024-12-07 10:10:20.249307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.532 qpair failed and we were unable to recover it. 
00:35:51.532 [2024-12-07 10:10:20.249449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.532 [2024-12-07 10:10:20.249481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.532 qpair failed and we were unable to recover it. 00:35:51.801 [2024-12-07 10:10:20.249672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.801 [2024-12-07 10:10:20.249703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.801 qpair failed and we were unable to recover it. 00:35:51.801 [2024-12-07 10:10:20.249960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.801 [2024-12-07 10:10:20.249978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.801 qpair failed and we were unable to recover it. 00:35:51.801 [2024-12-07 10:10:20.250149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.801 [2024-12-07 10:10:20.250165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.801 qpair failed and we were unable to recover it. 00:35:51.801 [2024-12-07 10:10:20.250356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.801 [2024-12-07 10:10:20.250373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.801 qpair failed and we were unable to recover it. 
00:35:51.801 [2024-12-07 10:10:20.250497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.801 [2024-12-07 10:10:20.250514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.801 qpair failed and we were unable to recover it. 00:35:51.801 [2024-12-07 10:10:20.250747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.801 [2024-12-07 10:10:20.250763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.801 qpair failed and we were unable to recover it. 00:35:51.801 [2024-12-07 10:10:20.250962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.801 [2024-12-07 10:10:20.250978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.801 qpair failed and we were unable to recover it. 00:35:51.801 [2024-12-07 10:10:20.252166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.801 [2024-12-07 10:10:20.252195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.801 qpair failed and we were unable to recover it. 00:35:51.801 [2024-12-07 10:10:20.252381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.801 [2024-12-07 10:10:20.252397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.801 qpair failed and we were unable to recover it. 
00:35:51.801 [2024-12-07 10:10:20.252574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.801 [2024-12-07 10:10:20.252607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.801 qpair failed and we were unable to recover it. 00:35:51.801 [2024-12-07 10:10:20.252845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.801 [2024-12-07 10:10:20.252877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.801 qpair failed and we were unable to recover it. 00:35:51.801 [2024-12-07 10:10:20.253088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.801 [2024-12-07 10:10:20.253122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.801 qpair failed and we were unable to recover it. 00:35:51.801 [2024-12-07 10:10:20.253377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.801 [2024-12-07 10:10:20.253410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.801 qpair failed and we were unable to recover it. 00:35:51.801 [2024-12-07 10:10:20.253551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.801 [2024-12-07 10:10:20.253583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.801 qpair failed and we were unable to recover it. 
00:35:51.801 [2024-12-07 10:10:20.253836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.801 [2024-12-07 10:10:20.253867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.801 qpair failed and we were unable to recover it. 00:35:51.801 [2024-12-07 10:10:20.254092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.801 [2024-12-07 10:10:20.254127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.801 qpair failed and we were unable to recover it. 00:35:51.801 [2024-12-07 10:10:20.254278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.801 [2024-12-07 10:10:20.254309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.801 qpair failed and we were unable to recover it. 00:35:51.801 [2024-12-07 10:10:20.254654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.254725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.254978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.255018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 
00:35:51.802 [2024-12-07 10:10:20.255291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.255324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.255486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.255518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.255668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.255701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.255961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.255996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.256252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.256286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 
00:35:51.802 [2024-12-07 10:10:20.256439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.256472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.256690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.256723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.256907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.256940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.257229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.257247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.257352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.257367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 
00:35:51.802 [2024-12-07 10:10:20.257483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.257530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.257787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.257826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.258066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.258084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.258324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.258348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.258543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.258559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 
00:35:51.802 [2024-12-07 10:10:20.258776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.258792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.258943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.258967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.259134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.259150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.259370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.259386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.259504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.259520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 
00:35:51.802 [2024-12-07 10:10:20.259772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.259788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.259914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.259930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.260063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.260079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.260308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.260325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.260544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.260560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 
00:35:51.802 [2024-12-07 10:10:20.260857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.802 [2024-12-07 10:10:20.260897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.802 qpair failed and we were unable to recover it. 00:35:51.802 [2024-12-07 10:10:20.261219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.803 [2024-12-07 10:10:20.261253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.803 qpair failed and we were unable to recover it. 00:35:51.803 [2024-12-07 10:10:20.261465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.803 [2024-12-07 10:10:20.261498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.803 qpair failed and we were unable to recover it. 00:35:51.803 [2024-12-07 10:10:20.261703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.803 [2024-12-07 10:10:20.261737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.803 qpair failed and we were unable to recover it. 00:35:51.803 [2024-12-07 10:10:20.261999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.803 [2024-12-07 10:10:20.262016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.803 qpair failed and we were unable to recover it. 
00:35:51.803 [2024-12-07 10:10:20.262238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.803 [2024-12-07 10:10:20.262271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.803 qpair failed and we were unable to recover it. 00:35:51.803 [2024-12-07 10:10:20.262480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.803 [2024-12-07 10:10:20.262513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.803 qpair failed and we were unable to recover it. 00:35:51.803 [2024-12-07 10:10:20.262827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.803 [2024-12-07 10:10:20.262860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.803 qpair failed and we were unable to recover it. 00:35:51.803 [2024-12-07 10:10:20.263066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.803 [2024-12-07 10:10:20.263083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.803 qpair failed and we were unable to recover it. 00:35:51.803 [2024-12-07 10:10:20.263255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.803 [2024-12-07 10:10:20.263289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.803 qpair failed and we were unable to recover it. 
00:35:51.803 [2024-12-07 10:10:20.263490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.263523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.803 [2024-12-07 10:10:20.263840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.263874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.803 [2024-12-07 10:10:20.264057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.264091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.803 [2024-12-07 10:10:20.264327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.264343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.803 [2024-12-07 10:10:20.264519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.264536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.803 [2024-12-07 10:10:20.264812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.264846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.803 [2024-12-07 10:10:20.265065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.265099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.803 [2024-12-07 10:10:20.265314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.265347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.803 [2024-12-07 10:10:20.265515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.265549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.803 [2024-12-07 10:10:20.265704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.265737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.803 [2024-12-07 10:10:20.266039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.266074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.803 [2024-12-07 10:10:20.266275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.266309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.803 [2024-12-07 10:10:20.266594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.266627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.803 [2024-12-07 10:10:20.266821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.266854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.803 [2024-12-07 10:10:20.267097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.267131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.803 [2024-12-07 10:10:20.267330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.267363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.803 [2024-12-07 10:10:20.267614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.267647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.803 [2024-12-07 10:10:20.267917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.267964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.803 [2024-12-07 10:10:20.268150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.803 [2024-12-07 10:10:20.268166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.803 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.268389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.268421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.268691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.268725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.268976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.268992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.269184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.269217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.269347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.269380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.269709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.269742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.270041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.270076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.270314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.270348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.270594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.270626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.270839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.270854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.271090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.271124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.271324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.271358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.271566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.271600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.271795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.271811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.271910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.271924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.272100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.272117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.272391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.272424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.272734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.272767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.273027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.273044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.273234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.273267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.273429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.273462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.273742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.273775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.274030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.274064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.274291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.274324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.274529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.274563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.274749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.274782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.275002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.275019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.275158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.804 [2024-12-07 10:10:20.275191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.804 qpair failed and we were unable to recover it.
00:35:51.804 [2024-12-07 10:10:20.275397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.275430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.275651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.275684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.275883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.275916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.276103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.276137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.276340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.276372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.276600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.276634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.276891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.276924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.277114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.277147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.277424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.277440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.277557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.277573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.277685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.277701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.277960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.277978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.278184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.278200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.278299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.278314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.278494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.278526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.278777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.278810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.278999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.279051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.279371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.279405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.279636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.279669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.279992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.280025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.280191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.280208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.280379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.280414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.280645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.280678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.280851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.280884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.281080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.281098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.281278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.281313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.281521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.281555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.805 qpair failed and we were unable to recover it.
00:35:51.805 [2024-12-07 10:10:20.281832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.805 [2024-12-07 10:10:20.281866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.282106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.282140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.282355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.282371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.282542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.282559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.282730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.282762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.283028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.283044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.283292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.283325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.283638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.283671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.283930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.283974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.284260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.284292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.284492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.284525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.284791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.284830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.285128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.285163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.285325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.285359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.285651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.285684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.285912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.285945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.286146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.286178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.286334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.286368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.286649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.286682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.286985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.287002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.287172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.287189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.287298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.287330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.287477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.287511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.287835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.287868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.288125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.288172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.288364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.288380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.288510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.288527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.288672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.288706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.288996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.289031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.806 [2024-12-07 10:10:20.289314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.806 [2024-12-07 10:10:20.289330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.806 qpair failed and we were unable to recover it.
00:35:51.807 [2024-12-07 10:10:20.289442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.807 [2024-12-07 10:10:20.289458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.807 qpair failed and we were unable to recover it.
00:35:51.807 [2024-12-07 10:10:20.289711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.807 [2024-12-07 10:10:20.289727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.807 qpair failed and we were unable to recover it.
00:35:51.807 [2024-12-07 10:10:20.289875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.807 [2024-12-07 10:10:20.289891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.807 qpair failed and we were unable to recover it.
00:35:51.807 [2024-12-07 10:10:20.289986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.807 [2024-12-07 10:10:20.290002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.807 qpair failed and we were unable to recover it.
00:35:51.807 [2024-12-07 10:10:20.290168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.807 [2024-12-07 10:10:20.290185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.807 qpair failed and we were unable to recover it.
00:35:51.807 [2024-12-07 10:10:20.290355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.807 [2024-12-07 10:10:20.290372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.807 qpair failed and we were unable to recover it.
00:35:51.807 [2024-12-07 10:10:20.290627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.807 [2024-12-07 10:10:20.290659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.807 qpair failed and we were unable to recover it.
00:35:51.807 [2024-12-07 10:10:20.290826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.807 [2024-12-07 10:10:20.290860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.807 qpair failed and we were unable to recover it.
00:35:51.807 [2024-12-07 10:10:20.291011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.807 [2024-12-07 10:10:20.291051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.807 qpair failed and we were unable to recover it.
00:35:51.807 [2024-12-07 10:10:20.291225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.807 [2024-12-07 10:10:20.291241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.807 qpair failed and we were unable to recover it.
00:35:51.807 [2024-12-07 10:10:20.291402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.807 [2024-12-07 10:10:20.291436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.807 qpair failed and we were unable to recover it.
00:35:51.807 [2024-12-07 10:10:20.291652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.807 [2024-12-07 10:10:20.291685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.807 qpair failed and we were unable to recover it. 00:35:51.807 [2024-12-07 10:10:20.291981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.807 [2024-12-07 10:10:20.292014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.807 qpair failed and we were unable to recover it. 00:35:51.807 [2024-12-07 10:10:20.292294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.807 [2024-12-07 10:10:20.292327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.807 qpair failed and we were unable to recover it. 00:35:51.807 [2024-12-07 10:10:20.292462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.807 [2024-12-07 10:10:20.292494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.807 qpair failed and we were unable to recover it. 00:35:51.807 [2024-12-07 10:10:20.292723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.807 [2024-12-07 10:10:20.292756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.807 qpair failed and we were unable to recover it. 
00:35:51.807 [2024-12-07 10:10:20.293022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.807 [2024-12-07 10:10:20.293056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.807 qpair failed and we were unable to recover it. 00:35:51.807 [2024-12-07 10:10:20.293266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.807 [2024-12-07 10:10:20.293300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.807 qpair failed and we were unable to recover it. 00:35:51.807 [2024-12-07 10:10:20.293512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.807 [2024-12-07 10:10:20.293545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.807 qpair failed and we were unable to recover it. 00:35:51.807 [2024-12-07 10:10:20.293774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.807 [2024-12-07 10:10:20.293806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.807 qpair failed and we were unable to recover it. 00:35:51.807 [2024-12-07 10:10:20.294058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.807 [2024-12-07 10:10:20.294092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.807 qpair failed and we were unable to recover it. 
00:35:51.807 [2024-12-07 10:10:20.294350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.807 [2024-12-07 10:10:20.294383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.807 qpair failed and we were unable to recover it. 00:35:51.807 [2024-12-07 10:10:20.294672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.294706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.294989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.295023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.295222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.295255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.295410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.295443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 
00:35:51.808 [2024-12-07 10:10:20.295790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.295823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.296065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.296099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.296248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.296280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.296434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.296467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.296761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.296794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 
00:35:51.808 [2024-12-07 10:10:20.297101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.297135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.297339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.297372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.297507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.297540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.297770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.297803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.298048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.298095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 
00:35:51.808 [2024-12-07 10:10:20.298259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.298293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.298498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.298532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.298689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.298721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.298911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.298944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.299152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.299186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 
00:35:51.808 [2024-12-07 10:10:20.299464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.299497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.299699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.299733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.299962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.299978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.300203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.300237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.300463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.300496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 
00:35:51.808 [2024-12-07 10:10:20.300631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.300663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.300888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.300905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.301133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.301150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.808 [2024-12-07 10:10:20.301344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.808 [2024-12-07 10:10:20.301361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.808 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.301535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.301551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 
00:35:51.809 [2024-12-07 10:10:20.301656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.301671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.301826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.301842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.302055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.302072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.302242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.302258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.302434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.302467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 
00:35:51.809 [2024-12-07 10:10:20.302619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.302653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.302945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.302986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.303241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.303274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.303498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.303531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.303754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.303787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 
00:35:51.809 [2024-12-07 10:10:20.304094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.304127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.304391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.304424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.304675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.304709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.304914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.304957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.305235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.305268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 
00:35:51.809 [2024-12-07 10:10:20.305452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.305484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.305755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.305788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.306006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.306022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.306246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.306279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.306481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.306515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 
00:35:51.809 [2024-12-07 10:10:20.306744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.306777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.306972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.306988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.307239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.307273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.307509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.307542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.307745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.307783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 
00:35:51.809 [2024-12-07 10:10:20.307952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.307969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.308157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.809 [2024-12-07 10:10:20.308190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.809 qpair failed and we were unable to recover it. 00:35:51.809 [2024-12-07 10:10:20.308336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.810 [2024-12-07 10:10:20.308368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.810 qpair failed and we were unable to recover it. 00:35:51.810 [2024-12-07 10:10:20.308705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.810 [2024-12-07 10:10:20.308739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.810 qpair failed and we were unable to recover it. 00:35:51.810 [2024-12-07 10:10:20.309017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.810 [2024-12-07 10:10:20.309051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.810 qpair failed and we were unable to recover it. 
00:35:51.810 [2024-12-07 10:10:20.309335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.810 [2024-12-07 10:10:20.309368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.810 qpair failed and we were unable to recover it. 00:35:51.810 [2024-12-07 10:10:20.309596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.810 [2024-12-07 10:10:20.309629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.810 qpair failed and we were unable to recover it. 00:35:51.810 [2024-12-07 10:10:20.309818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.810 [2024-12-07 10:10:20.309850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.810 qpair failed and we were unable to recover it. 00:35:51.810 [2024-12-07 10:10:20.310127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.810 [2024-12-07 10:10:20.310143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.810 qpair failed and we were unable to recover it. 00:35:51.810 [2024-12-07 10:10:20.310328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.810 [2024-12-07 10:10:20.310360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:51.810 qpair failed and we were unable to recover it. 
00:35:51.810 [2024-12-07 10:10:20.310575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.810 [2024-12-07 10:10:20.310608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:51.810 qpair failed and we were unable to recover it.
[... the same connect()-failed / sock-connection-error / qpair-failed sequence for tqpair=0x2159010 repeats with advancing timestamps through 2024-12-07 10:10:20.329807 ...]
00:35:51.813 [2024-12-07 10:10:20.330006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.813 [2024-12-07 10:10:20.330045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.813 qpair failed and we were unable to recover it.
[... the same connect()-failed / sock-connection-error / qpair-failed sequence for tqpair=0x7efc04000b90 repeats with advancing timestamps through 2024-12-07 10:10:20.337051 ...]
00:35:51.814 [2024-12-07 10:10:20.337223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.814 [2024-12-07 10:10:20.337239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.814 qpair failed and we were unable to recover it. 00:35:51.814 [2024-12-07 10:10:20.337416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.814 [2024-12-07 10:10:20.337449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.814 qpair failed and we were unable to recover it. 00:35:51.814 [2024-12-07 10:10:20.337716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.814 [2024-12-07 10:10:20.337749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.814 qpair failed and we were unable to recover it. 00:35:51.814 [2024-12-07 10:10:20.337942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.814 [2024-12-07 10:10:20.338011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.814 qpair failed and we were unable to recover it. 00:35:51.814 [2024-12-07 10:10:20.338190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.814 [2024-12-07 10:10:20.338206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.814 qpair failed and we were unable to recover it. 
00:35:51.814 [2024-12-07 10:10:20.338371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.814 [2024-12-07 10:10:20.338386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.814 qpair failed and we were unable to recover it. 00:35:51.814 [2024-12-07 10:10:20.338599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.814 [2024-12-07 10:10:20.338630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.814 qpair failed and we were unable to recover it. 00:35:51.814 [2024-12-07 10:10:20.338767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.814 [2024-12-07 10:10:20.338800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.814 qpair failed and we were unable to recover it. 00:35:51.814 [2024-12-07 10:10:20.339002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.814 [2024-12-07 10:10:20.339035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.814 qpair failed and we were unable to recover it. 00:35:51.814 [2024-12-07 10:10:20.339246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.814 [2024-12-07 10:10:20.339279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:51.814 qpair failed and we were unable to recover it. 
00:35:51.814 [2024-12-07 10:10:20.339596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.814 [2024-12-07 10:10:20.339664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.814 qpair failed and we were unable to recover it. 00:35:51.814 [2024-12-07 10:10:20.339838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.814 [2024-12-07 10:10:20.339856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.814 qpair failed and we were unable to recover it. 00:35:51.814 [2024-12-07 10:10:20.339976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.814 [2024-12-07 10:10:20.340012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.814 qpair failed and we were unable to recover it. 00:35:51.814 [2024-12-07 10:10:20.340230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.340262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.340414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.340447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 
00:35:51.815 [2024-12-07 10:10:20.340594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.340627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.340836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.340869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.341120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.341154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.341366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.341383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.341629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.341645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 
00:35:51.815 [2024-12-07 10:10:20.341883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.341899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.342151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.342168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.342292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.342309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.342448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.342491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.342786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.342819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 
00:35:51.815 [2024-12-07 10:10:20.343042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.343058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.343233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.343276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.343505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.343541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.343743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.343776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.344032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.344049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 
00:35:51.815 [2024-12-07 10:10:20.344279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.344311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.344469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.344502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.344717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.344749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.345033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.345067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.345309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.345341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 
00:35:51.815 [2024-12-07 10:10:20.345605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.345639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.345869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.345902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.346126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.346143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.346314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.346348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.346681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.346713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 
00:35:51.815 [2024-12-07 10:10:20.346867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.346900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.347066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.815 [2024-12-07 10:10:20.347101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.815 qpair failed and we were unable to recover it. 00:35:51.815 [2024-12-07 10:10:20.347311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.347344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.347511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.347544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.347848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.347882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 
00:35:51.816 [2024-12-07 10:10:20.348154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.348193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.348318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.348334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.348522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.348538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.348660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.348676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.348838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.348855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 
00:35:51.816 [2024-12-07 10:10:20.349035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.349071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.349333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.349366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.349577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.349610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.349890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.349924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.350232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.350267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 
00:35:51.816 [2024-12-07 10:10:20.350528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.350560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.350869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.350885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.351049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.351065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.351236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.351267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.351430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.351463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 
00:35:51.816 [2024-12-07 10:10:20.351611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.351644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.351945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.351988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.352195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.352229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.352455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.352474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.352738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.352755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 
00:35:51.816 [2024-12-07 10:10:20.352908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.352924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.353146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.353163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.353333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.353366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.353691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.353723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.353979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.354012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 
00:35:51.816 [2024-12-07 10:10:20.354276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.354309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.816 qpair failed and we were unable to recover it. 00:35:51.816 [2024-12-07 10:10:20.354548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.816 [2024-12-07 10:10:20.354581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.817 qpair failed and we were unable to recover it. 00:35:51.817 [2024-12-07 10:10:20.354865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.817 [2024-12-07 10:10:20.354898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.817 qpair failed and we were unable to recover it. 00:35:51.817 [2024-12-07 10:10:20.355065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.817 [2024-12-07 10:10:20.355098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.817 qpair failed and we were unable to recover it. 00:35:51.817 [2024-12-07 10:10:20.355323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.817 [2024-12-07 10:10:20.355339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.817 qpair failed and we were unable to recover it. 
00:35:51.817 [2024-12-07 10:10:20.355530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.817 [2024-12-07 10:10:20.355564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.817 qpair failed and we were unable to recover it. 00:35:51.817 [2024-12-07 10:10:20.355843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.817 [2024-12-07 10:10:20.355875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.817 qpair failed and we were unable to recover it. 00:35:51.817 [2024-12-07 10:10:20.356028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.817 [2024-12-07 10:10:20.356062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.817 qpair failed and we were unable to recover it. 00:35:51.817 [2024-12-07 10:10:20.356274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.817 [2024-12-07 10:10:20.356307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.817 qpair failed and we were unable to recover it. 00:35:51.817 [2024-12-07 10:10:20.356465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.817 [2024-12-07 10:10:20.356499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.817 qpair failed and we were unable to recover it. 
00:35:51.817 [2024-12-07 10:10:20.356781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.817 [2024-12-07 10:10:20.356814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.817 qpair failed and we were unable to recover it. 00:35:51.817 [2024-12-07 10:10:20.357066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.817 [2024-12-07 10:10:20.357100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.817 qpair failed and we were unable to recover it. 00:35:51.817 [2024-12-07 10:10:20.357259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.817 [2024-12-07 10:10:20.357292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.817 qpair failed and we were unable to recover it. 00:35:51.817 [2024-12-07 10:10:20.357552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.817 [2024-12-07 10:10:20.357584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.817 qpair failed and we were unable to recover it. 00:35:51.817 [2024-12-07 10:10:20.357786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.817 [2024-12-07 10:10:20.357819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.817 qpair failed and we were unable to recover it. 
00:35:51.817 [2024-12-07 10:10:20.357988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.817 [2024-12-07 10:10:20.358023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.817 qpair failed and we were unable to recover it.
00:35:51.817 [2024-12-07 10:10:20.358238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.817 [2024-12-07 10:10:20.358273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.817 qpair failed and we were unable to recover it.
00:35:51.817 [2024-12-07 10:10:20.358485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.817 [2024-12-07 10:10:20.358518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.817 qpair failed and we were unable to recover it.
00:35:51.817 [2024-12-07 10:10:20.358684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.817 [2024-12-07 10:10:20.358700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.817 qpair failed and we were unable to recover it.
00:35:51.817 [2024-12-07 10:10:20.358862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.817 [2024-12-07 10:10:20.358897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.817 qpair failed and we were unable to recover it.
00:35:51.817 [2024-12-07 10:10:20.359201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.817 [2024-12-07 10:10:20.359234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.817 qpair failed and we were unable to recover it.
00:35:51.817 [2024-12-07 10:10:20.359468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.817 [2024-12-07 10:10:20.359501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.817 qpair failed and we were unable to recover it.
00:35:51.817 [2024-12-07 10:10:20.359707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.817 [2024-12-07 10:10:20.359741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.817 qpair failed and we were unable to recover it.
00:35:51.817 [2024-12-07 10:10:20.360031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.817 [2024-12-07 10:10:20.360066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.817 qpair failed and we were unable to recover it.
00:35:51.817 [2024-12-07 10:10:20.360261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.817 [2024-12-07 10:10:20.360278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.817 qpair failed and we were unable to recover it.
00:35:51.817 [2024-12-07 10:10:20.360469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.817 [2024-12-07 10:10:20.360503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.817 qpair failed and we were unable to recover it.
00:35:51.817 [2024-12-07 10:10:20.360786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.817 [2024-12-07 10:10:20.360837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.817 qpair failed and we were unable to recover it.
00:35:51.817 [2024-12-07 10:10:20.361111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.817 [2024-12-07 10:10:20.361146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.817 qpair failed and we were unable to recover it.
00:35:51.817 [2024-12-07 10:10:20.361303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.817 [2024-12-07 10:10:20.361336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.817 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.361550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.361566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.361684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.361699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.361892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.361908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.362144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.362160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.362359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.362393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.362639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.362673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.362893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.362927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.363150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.363183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.363466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.363482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.363604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.363621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.363871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.363904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.364138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.364172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.364445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.364461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.364703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.364719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.364821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.364836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.365083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.365100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.365282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.365315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.365519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.365553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.365760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.365794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.366005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.366041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.366303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.366342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.366529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.366545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.366830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.366862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.367062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.367079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.367194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.367228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.367378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.367410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.367626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.367657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.367857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.367891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.368159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.818 [2024-12-07 10:10:20.368175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.818 qpair failed and we were unable to recover it.
00:35:51.818 [2024-12-07 10:10:20.368351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.368367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.368547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.368581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.368724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.368763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.368981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.369015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.369188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.369205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.369382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.369415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.369663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.369698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.369988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.370023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.370307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.370340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.370481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.370497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.370721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.370753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.370985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.371019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.371295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.371328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.371543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.371577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.371839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.371879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.372058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.372075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.372248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.372265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.372390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.372406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.372647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.372663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.372835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.372869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.373077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.373094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.373345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.373361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.373474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.373491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.373741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.819 [2024-12-07 10:10:20.373757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.819 qpair failed and we were unable to recover it.
00:35:51.819 [2024-12-07 10:10:20.373919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.373936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.374168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.374201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.374364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.374398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.374545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.374579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.374793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.374827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.375076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.375093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.375249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.375267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.375443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.375476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.375708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.375742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.375912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.375944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.376227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.376260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.376462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.376494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.376701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.376734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.376890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.376923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.377141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.377176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.377434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.377467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.377712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.377746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.378033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.378067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.378262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.378282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.378442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.378459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.378577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.378593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.378733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.378750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.378864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.378880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.379130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.379146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.379324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.379340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.379515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.379548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.379877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.379909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.380163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.380196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.820 qpair failed and we were unable to recover it.
00:35:51.820 [2024-12-07 10:10:20.380337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.820 [2024-12-07 10:10:20.380371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.380666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.380698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.380972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.381016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.381149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.381166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.381370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.381404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.381552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.381584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.381814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.381848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.382084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.382119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.382279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.382312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.382577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.382610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.382824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.382857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.383109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.383126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.383236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.383270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.383530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.383563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.383842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.383875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.384097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.384131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.384344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.384376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.384619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.384653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.384804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.384837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.385104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.385137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.385402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.385434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.385656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.385689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.385978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.386014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.386222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.386254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.386491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.386525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.386750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.386786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.387092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.387127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.387324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.387358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.821 qpair failed and we were unable to recover it.
00:35:51.821 [2024-12-07 10:10:20.387618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.821 [2024-12-07 10:10:20.387650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.387802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.387837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.388067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.388110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.388375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.388391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.388520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.388536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.388809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.388841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.389116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.389151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.389384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.389400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.389594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.389610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.389726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.389743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.389906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.389922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.390190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.390225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.390446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.390479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.390771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.390804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.391068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.391103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.391328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.391361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.391584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.391618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.391861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.391894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.392106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2166f30 is same with the state(6) to be set
00:35:51.822 [2024-12-07 10:10:20.392445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.392487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.392633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.392652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.392910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.392943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.393128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.393161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.393428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.393461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.393708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.393740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.393964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.393999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.394114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.394131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.394302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.394318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.394523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.822 [2024-12-07 10:10:20.394557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.822 qpair failed and we were unable to recover it.
00:35:51.822 [2024-12-07 10:10:20.394849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.394892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.395192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.395226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.395445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.395478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.395637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.395670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.395887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.395921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.396080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.396113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.396346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.396363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.396537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.396570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.396776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.396808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.397089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.397122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.397367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.397400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.397733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.397766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.398054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.398097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.398193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.398208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.398466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.398500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.398761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.398793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.399059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.399092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.399333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.399365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.399607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.399639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.399828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.399844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.400084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.400101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.400229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.400262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.400413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.400445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.400681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.400715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.400867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.400900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.401056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.401091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.401321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.401354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.401717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.401790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.823 [2024-12-07 10:10:20.402049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.823 [2024-12-07 10:10:20.402064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.823 qpair failed and we were unable to recover it.
00:35:51.824 [2024-12-07 10:10:20.402253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.824 [2024-12-07 10:10:20.402288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.824 qpair failed and we were unable to recover it.
00:35:51.824 [2024-12-07 10:10:20.402448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.824 [2024-12-07 10:10:20.402480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.824 qpair failed and we were unable to recover it.
00:35:51.824 [2024-12-07 10:10:20.402754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.824 [2024-12-07 10:10:20.402788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.824 qpair failed and we were unable to recover it.
00:35:51.824 [2024-12-07 10:10:20.403009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.824 [2024-12-07 10:10:20.403044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.824 qpair failed and we were unable to recover it.
00:35:51.824 [2024-12-07 10:10:20.403291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.824 [2024-12-07 10:10:20.403323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.824 qpair failed and we were unable to recover it.
00:35:51.824 [2024-12-07 10:10:20.403505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.824 [2024-12-07 10:10:20.403518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.824 qpair failed and we were unable to recover it.
00:35:51.824 [2024-12-07 10:10:20.403694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.824 [2024-12-07 10:10:20.403707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.824 qpair failed and we were unable to recover it.
00:35:51.824 [2024-12-07 10:10:20.403880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.824 [2024-12-07 10:10:20.403914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.824 qpair failed and we were unable to recover it.
00:35:51.824 [2024-12-07 10:10:20.404186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.824 [2024-12-07 10:10:20.404220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.824 qpair failed and we were unable to recover it.
00:35:51.824 [2024-12-07 10:10:20.404389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.824 [2024-12-07 10:10:20.404421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.824 qpair failed and we were unable to recover it.
00:35:51.824 [2024-12-07 10:10:20.404693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.824 [2024-12-07 10:10:20.404725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.824 qpair failed and we were unable to recover it. 00:35:51.824 [2024-12-07 10:10:20.404963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.824 [2024-12-07 10:10:20.405007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.824 qpair failed and we were unable to recover it. 00:35:51.824 [2024-12-07 10:10:20.405233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.824 [2024-12-07 10:10:20.405267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.824 qpair failed and we were unable to recover it. 00:35:51.824 [2024-12-07 10:10:20.405425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.824 [2024-12-07 10:10:20.405460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.824 qpair failed and we were unable to recover it. 00:35:51.824 [2024-12-07 10:10:20.405722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.824 [2024-12-07 10:10:20.405735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.824 qpair failed and we were unable to recover it. 
00:35:51.824 [2024-12-07 10:10:20.405904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.824 [2024-12-07 10:10:20.405917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.824 qpair failed and we were unable to recover it. 00:35:51.824 [2024-12-07 10:10:20.406175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.824 [2024-12-07 10:10:20.406209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.824 qpair failed and we were unable to recover it. 00:35:51.824 [2024-12-07 10:10:20.406436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.824 [2024-12-07 10:10:20.406469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.824 qpair failed and we were unable to recover it. 00:35:51.824 [2024-12-07 10:10:20.406729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.824 [2024-12-07 10:10:20.406761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.824 qpair failed and we were unable to recover it. 00:35:51.824 [2024-12-07 10:10:20.407047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.824 [2024-12-07 10:10:20.407082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.824 qpair failed and we were unable to recover it. 
00:35:51.824 [2024-12-07 10:10:20.407317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.824 [2024-12-07 10:10:20.407354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.824 qpair failed and we were unable to recover it. 00:35:51.824 [2024-12-07 10:10:20.407560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.824 [2024-12-07 10:10:20.407591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.824 qpair failed and we were unable to recover it. 00:35:51.824 [2024-12-07 10:10:20.407785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.824 [2024-12-07 10:10:20.407818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.824 qpair failed and we were unable to recover it. 00:35:51.824 [2024-12-07 10:10:20.408027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.824 [2024-12-07 10:10:20.408061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.824 qpair failed and we were unable to recover it. 00:35:51.824 [2024-12-07 10:10:20.408269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.824 [2024-12-07 10:10:20.408282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.824 qpair failed and we were unable to recover it. 
00:35:51.824 [2024-12-07 10:10:20.408493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.824 [2024-12-07 10:10:20.408526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.824 qpair failed and we were unable to recover it. 00:35:51.824 [2024-12-07 10:10:20.408669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.824 [2024-12-07 10:10:20.408701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.824 qpair failed and we were unable to recover it. 00:35:51.824 [2024-12-07 10:10:20.408899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.824 [2024-12-07 10:10:20.408931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.824 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.409187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.409233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.409409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.409421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 
00:35:51.825 [2024-12-07 10:10:20.409526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.409537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.409757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.409789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.409995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.410029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.410236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.410269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.410595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.410628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 
00:35:51.825 [2024-12-07 10:10:20.410897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.410930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.411157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.411191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.411394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.411406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.411628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.411660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.411863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.411896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 
00:35:51.825 [2024-12-07 10:10:20.412179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.412212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.412416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.412448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.412697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.412710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.412895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.412906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.413085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.413119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 
00:35:51.825 [2024-12-07 10:10:20.413383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.413416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.413703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.413737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.413886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.413919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.414223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.414257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.414473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.414486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 
00:35:51.825 [2024-12-07 10:10:20.414733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.414765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.414976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.415017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.415281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.415314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.415506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.415518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.825 qpair failed and we were unable to recover it. 00:35:51.825 [2024-12-07 10:10:20.415781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.825 [2024-12-07 10:10:20.415793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 
00:35:51.826 [2024-12-07 10:10:20.416026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.416060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.416274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.416287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.416485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.416518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.416817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.416849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.417007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.417051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 
00:35:51.826 [2024-12-07 10:10:20.417268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.417280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.417430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.417443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.417557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.417569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.417694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.417721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.417969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.418003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 
00:35:51.826 [2024-12-07 10:10:20.418223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.418256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.418449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.418461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.418627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.418640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.418850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.418863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.419126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.419167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 
00:35:51.826 [2024-12-07 10:10:20.419381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.419400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.419618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.419652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.419928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.419981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.420191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.420226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.420379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.420413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 
00:35:51.826 [2024-12-07 10:10:20.420576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.420610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.420748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.420781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.420997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.826 [2024-12-07 10:10:20.421033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.826 qpair failed and we were unable to recover it. 00:35:51.826 [2024-12-07 10:10:20.421326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.827 [2024-12-07 10:10:20.421363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.827 qpair failed and we were unable to recover it. 00:35:51.827 [2024-12-07 10:10:20.421664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.827 [2024-12-07 10:10:20.421697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.827 qpair failed and we were unable to recover it. 
00:35:51.827 [2024-12-07 10:10:20.421967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.827 [2024-12-07 10:10:20.422000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.827 qpair failed and we were unable to recover it. 00:35:51.827 [2024-12-07 10:10:20.422214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.827 [2024-12-07 10:10:20.422248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.827 qpair failed and we were unable to recover it. 00:35:51.827 [2024-12-07 10:10:20.422484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.827 [2024-12-07 10:10:20.422516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.827 qpair failed and we were unable to recover it. 00:35:51.827 [2024-12-07 10:10:20.422721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.827 [2024-12-07 10:10:20.422753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.827 qpair failed and we were unable to recover it. 00:35:51.827 [2024-12-07 10:10:20.422968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.827 [2024-12-07 10:10:20.423002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.827 qpair failed and we were unable to recover it. 
00:35:51.827 [2024-12-07 10:10:20.423212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.827 [2024-12-07 10:10:20.423225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.827 qpair failed and we were unable to recover it. 00:35:51.827 [2024-12-07 10:10:20.423396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.827 [2024-12-07 10:10:20.423429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.827 qpair failed and we were unable to recover it. 00:35:51.827 [2024-12-07 10:10:20.423662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.827 [2024-12-07 10:10:20.423696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.827 qpair failed and we were unable to recover it. 00:35:51.827 [2024-12-07 10:10:20.423851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.827 [2024-12-07 10:10:20.423884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.827 qpair failed and we were unable to recover it. 00:35:51.827 [2024-12-07 10:10:20.424198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.827 [2024-12-07 10:10:20.424232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.827 qpair failed and we were unable to recover it. 
00:35:51.827 [2024-12-07 10:10:20.424395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.827 [2024-12-07 10:10:20.424431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.827 qpair failed and we were unable to recover it. 00:35:51.827 [2024-12-07 10:10:20.424649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.827 [2024-12-07 10:10:20.424665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.827 qpair failed and we were unable to recover it. 00:35:51.827 [2024-12-07 10:10:20.424883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.827 [2024-12-07 10:10:20.424895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.827 qpair failed and we were unable to recover it. 00:35:51.827 [2024-12-07 10:10:20.425075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.827 [2024-12-07 10:10:20.425088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.827 qpair failed and we were unable to recover it. 00:35:51.827 [2024-12-07 10:10:20.425250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.827 [2024-12-07 10:10:20.425262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.827 qpair failed and we were unable to recover it. 
00:35:51.827 [2024-12-07 10:10:20.425379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.827 [2024-12-07 10:10:20.425392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.827 qpair failed and we were unable to recover it.
00:35:51.827 [2024-12-07 10:10:20.425521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.827 [2024-12-07 10:10:20.425553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.827 qpair failed and we were unable to recover it.
00:35:51.827 [2024-12-07 10:10:20.425766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.827 [2024-12-07 10:10:20.425799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.827 qpair failed and we were unable to recover it.
00:35:51.827 [2024-12-07 10:10:20.426004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.827 [2024-12-07 10:10:20.426038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.827 qpair failed and we were unable to recover it.
00:35:51.827 [2024-12-07 10:10:20.426255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.827 [2024-12-07 10:10:20.426289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.827 qpair failed and we were unable to recover it.
00:35:51.827 [2024-12-07 10:10:20.426443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.827 [2024-12-07 10:10:20.426477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.827 qpair failed and we were unable to recover it.
00:35:51.827 [2024-12-07 10:10:20.427730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.827 [2024-12-07 10:10:20.427756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.827 qpair failed and we were unable to recover it.
00:35:51.827 [2024-12-07 10:10:20.428009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.827 [2024-12-07 10:10:20.428045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.827 qpair failed and we were unable to recover it.
00:35:51.827 [2024-12-07 10:10:20.428336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.827 [2024-12-07 10:10:20.428369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.827 qpair failed and we were unable to recover it.
00:35:51.827 [2024-12-07 10:10:20.428522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.827 [2024-12-07 10:10:20.428534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.827 qpair failed and we were unable to recover it.
00:35:51.827 [2024-12-07 10:10:20.428839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.827 [2024-12-07 10:10:20.428872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.827 qpair failed and we were unable to recover it.
00:35:51.827 [2024-12-07 10:10:20.429131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.827 [2024-12-07 10:10:20.429164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.827 qpair failed and we were unable to recover it.
00:35:51.827 [2024-12-07 10:10:20.429368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.827 [2024-12-07 10:10:20.429400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.827 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.429626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.429659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.429864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.429897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.430123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.430157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.430417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.430449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.430646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.430658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.430902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.430934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.431204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.431236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.431374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.431406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.431723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.431735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.431929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.431941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.432141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.432157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.432270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.432283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.432474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.432486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.432641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.432654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.433409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.433434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.433641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.433654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.433900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.433935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.434905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.434929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.435161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.435175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.435301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.435335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.435551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.435584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.435834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.435867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.436088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.436124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.436340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.436373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.436653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.436666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.436762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.436774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.436963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.436998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.437300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.437333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.437542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.828 [2024-12-07 10:10:20.437576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.828 qpair failed and we were unable to recover it.
00:35:51.828 [2024-12-07 10:10:20.437785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.437818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.438082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.438117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.438315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.438327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.438600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.438613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.438850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.438862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.439028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.439040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.439147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.439159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.439351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.439364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.439588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.439602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.439734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.439767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.439970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.440004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.440198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.440232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.440496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.440509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.440731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.440765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.441085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.441118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.441379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.441412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.441617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.441649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.441814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.441848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.442106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.442141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.442363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.442397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.442596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.442608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.442836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.442875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.443028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.443062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.443255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.443288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.443520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.443533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.443789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.443802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.444044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.444078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.444361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.444394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.444652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.444664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.444848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.444860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.445024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.445058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.445261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.829 [2024-12-07 10:10:20.445294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.829 qpair failed and we were unable to recover it.
00:35:51.829 [2024-12-07 10:10:20.445455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.445488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.445786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.445819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.445977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.446011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.446291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.446326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.446630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.446663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.446863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.446896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.447078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.447113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.447269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.447301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.447507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.447519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.447705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.447737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.448005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.448039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.448338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.448370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.448505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.448538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.448801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.448834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.449062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.449095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.449224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.449257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.449430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.449443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.449668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.449700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.449981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.450015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.450157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.450192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.450394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.450406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.450656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.450668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.450926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.450939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.451089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.451102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.451246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.451259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.451409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.451422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.451578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.451591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.451815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.451848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.452076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.452112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.830 qpair failed and we were unable to recover it.
00:35:51.830 [2024-12-07 10:10:20.452326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.830 [2024-12-07 10:10:20.452366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.831 qpair failed and we were unable to recover it.
00:35:51.831 [2024-12-07 10:10:20.452625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.831 [2024-12-07 10:10:20.452658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.831 qpair failed and we were unable to recover it.
00:35:51.831 [2024-12-07 10:10:20.452945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.831 [2024-12-07 10:10:20.452986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.831 qpair failed and we were unable to recover it.
00:35:51.831 [2024-12-07 10:10:20.453203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.831 [2024-12-07 10:10:20.453237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.831 qpair failed and we were unable to recover it.
00:35:51.831 [2024-12-07 10:10:20.453445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.831 [2024-12-07 10:10:20.453458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.831 qpair failed and we were unable to recover it.
00:35:51.831 [2024-12-07 10:10:20.453657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.831 [2024-12-07 10:10:20.453690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.831 qpair failed and we were unable to recover it.
00:35:51.831 [2024-12-07 10:10:20.453826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.831 [2024-12-07 10:10:20.453859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.831 qpair failed and we were unable to recover it.
00:35:51.831 [2024-12-07 10:10:20.454136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.831 [2024-12-07 10:10:20.454169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.831 qpair failed and we were unable to recover it.
00:35:51.831 [2024-12-07 10:10:20.454354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.831 [2024-12-07 10:10:20.454367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.831 qpair failed and we were unable to recover it.
00:35:51.831 [2024-12-07 10:10:20.454473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.831 [2024-12-07 10:10:20.454512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.831 qpair failed and we were unable to recover it.
00:35:51.831 [2024-12-07 10:10:20.454734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.831 [2024-12-07 10:10:20.454768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.831 qpair failed and we were unable to recover it.
00:35:51.831 [2024-12-07 10:10:20.455031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.831 [2024-12-07 10:10:20.455076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:51.831 qpair failed and we were unable to recover it.
00:35:51.831 [2024-12-07 10:10:20.455193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.831 [2024-12-07 10:10:20.455205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.831 qpair failed and we were unable to recover it. 00:35:51.831 [2024-12-07 10:10:20.455372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.831 [2024-12-07 10:10:20.455385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.831 qpair failed and we were unable to recover it. 00:35:51.831 [2024-12-07 10:10:20.455534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.831 [2024-12-07 10:10:20.455547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.831 qpair failed and we were unable to recover it. 00:35:51.831 [2024-12-07 10:10:20.455714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.831 [2024-12-07 10:10:20.455726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.831 qpair failed and we were unable to recover it. 00:35:51.831 [2024-12-07 10:10:20.455837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.831 [2024-12-07 10:10:20.455849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.831 qpair failed and we were unable to recover it. 
00:35:51.831 [2024-12-07 10:10:20.456057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.831 [2024-12-07 10:10:20.456091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.831 qpair failed and we were unable to recover it. 00:35:51.831 [2024-12-07 10:10:20.456312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.831 [2024-12-07 10:10:20.456345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.831 qpair failed and we were unable to recover it. 00:35:51.831 [2024-12-07 10:10:20.456527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.831 [2024-12-07 10:10:20.456539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.831 qpair failed and we were unable to recover it. 00:35:51.831 [2024-12-07 10:10:20.456786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.831 [2024-12-07 10:10:20.456819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.831 qpair failed and we were unable to recover it. 00:35:51.831 [2024-12-07 10:10:20.456988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.831 [2024-12-07 10:10:20.457022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.831 qpair failed and we were unable to recover it. 
00:35:51.831 [2024-12-07 10:10:20.457220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.831 [2024-12-07 10:10:20.457253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.831 qpair failed and we were unable to recover it. 00:35:51.831 [2024-12-07 10:10:20.457386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.831 [2024-12-07 10:10:20.457419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.831 qpair failed and we were unable to recover it. 00:35:51.831 [2024-12-07 10:10:20.457577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.831 [2024-12-07 10:10:20.457609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.831 qpair failed and we were unable to recover it. 00:35:51.831 [2024-12-07 10:10:20.457749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.457781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.458020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.458053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 
00:35:51.832 [2024-12-07 10:10:20.458360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.458373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.458599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.458610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.458805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.458817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.459039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.459052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.459252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.459285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 
00:35:51.832 [2024-12-07 10:10:20.459544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.459576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.459726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.459759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.460001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.460030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.460148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.460178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.460405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.460438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 
00:35:51.832 [2024-12-07 10:10:20.460689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.460721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.460913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.460946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.461160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.461194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.461407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.461421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.461586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.461620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 
00:35:51.832 [2024-12-07 10:10:20.461846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.461879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.462036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.462071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.462298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.462321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.462543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.462576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.462816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.462850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 
00:35:51.832 [2024-12-07 10:10:20.463086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.463120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.463331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.463365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.463675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.463709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.463978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.464012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.464245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.464257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 
00:35:51.832 [2024-12-07 10:10:20.464441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.464473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.832 qpair failed and we were unable to recover it. 00:35:51.832 [2024-12-07 10:10:20.464731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.832 [2024-12-07 10:10:20.464763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.464986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.465021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.465281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.465313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.465526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.465559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 
00:35:51.833 [2024-12-07 10:10:20.465772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.465805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.466024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.466059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.466331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.466343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.466558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.466570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.466708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.466721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 
00:35:51.833 [2024-12-07 10:10:20.466902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.466935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.467206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.467239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.467449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.467462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.467687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.467720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.467916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.467956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 
00:35:51.833 [2024-12-07 10:10:20.468104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.468138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.468432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.468464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.468699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.468733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.469022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.469055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.469320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.469353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 
00:35:51.833 [2024-12-07 10:10:20.469510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.469542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.469806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.469839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.470099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.470112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.470346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.470375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.470593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.470606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 
00:35:51.833 [2024-12-07 10:10:20.470768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.470781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.470897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.470910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.471107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.471120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.471235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.833 [2024-12-07 10:10:20.471275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.833 qpair failed and we were unable to recover it. 00:35:51.833 [2024-12-07 10:10:20.471553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.471585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 
00:35:51.834 [2024-12-07 10:10:20.471847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.471898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.472197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.472231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.472457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.472490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.472742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.472775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.473087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.473121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 
00:35:51.834 [2024-12-07 10:10:20.473347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.473381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.473575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.473587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.473778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.473811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.474095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.474131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.474337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.474350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 
00:35:51.834 [2024-12-07 10:10:20.474451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.474473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.474642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.474654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.474811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.474843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.475004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.475039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.475324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.475363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 
00:35:51.834 [2024-12-07 10:10:20.475533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.475545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.475719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.475732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.476002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.476015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.476138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.476150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.476389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.476402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 
00:35:51.834 [2024-12-07 10:10:20.476567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.476580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.476795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.476829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.477061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.477095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.477355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.477367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.477535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.477568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 
00:35:51.834 [2024-12-07 10:10:20.477842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.477877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.478096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.834 [2024-12-07 10:10:20.478130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.834 qpair failed and we were unable to recover it. 00:35:51.834 [2024-12-07 10:10:20.478345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.478377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.478607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.478641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.478929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.478973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 
00:35:51.835 [2024-12-07 10:10:20.479250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.479283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.479444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.479477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.479783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.479818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.480024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.480061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.480280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.480314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 
00:35:51.835 [2024-12-07 10:10:20.480454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.480488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.480723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.480757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.480903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.480959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.481262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.481314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.481473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.481508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 
00:35:51.835 [2024-12-07 10:10:20.481729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.481762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.481964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.482000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.482198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.482213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.482312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.482323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.482490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.482527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 
00:35:51.835 [2024-12-07 10:10:20.482773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.482811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.482969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.483005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.483230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.483254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.483362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.483373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.483581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.483593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 
00:35:51.835 [2024-12-07 10:10:20.483819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.483831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.484031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.484046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.484147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.484159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.484374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.484409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.484638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.484672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 
00:35:51.835 [2024-12-07 10:10:20.484959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.485002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.485219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.485253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.835 [2024-12-07 10:10:20.485505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.835 [2024-12-07 10:10:20.485539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.835 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.485836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.485850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.486108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.486122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 
00:35:51.836 [2024-12-07 10:10:20.486214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.486226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.486397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.486431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.486779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.486814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.486994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.487027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.487258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.487294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 
00:35:51.836 [2024-12-07 10:10:20.487464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.487498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.487764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.487777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.487951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.487964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.488187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.488221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.488448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.488462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 
00:35:51.836 [2024-12-07 10:10:20.488572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.488586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.488693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.488706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.488926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.488939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.489279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.489293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.489436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.489448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 
00:35:51.836 [2024-12-07 10:10:20.489562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.489575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.489682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.489694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.489923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.489936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.490165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.490181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.490461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.490475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 
00:35:51.836 [2024-12-07 10:10:20.490576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.490589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.490813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.490826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.491072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.491106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.491400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.491436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 00:35:51.836 [2024-12-07 10:10:20.491585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.491625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.836 qpair failed and we were unable to recover it. 
00:35:51.836 [2024-12-07 10:10:20.491815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.836 [2024-12-07 10:10:20.491828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.492063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.492099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.492244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.492280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.492510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.492543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.493498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.493526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 
00:35:51.837 [2024-12-07 10:10:20.493819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.493833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.494049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.494087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.494241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.494274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.494499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.494531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.494793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.494808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 
00:35:51.837 [2024-12-07 10:10:20.495117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.495130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.495350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.495365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.495534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.495547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.495791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.495805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.495902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.495915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 
00:35:51.837 [2024-12-07 10:10:20.496124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.496159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.496333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.496367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.496558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.496590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.496748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.496784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.497108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.497148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 
00:35:51.837 [2024-12-07 10:10:20.497452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.497485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.497715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.497750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.497995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.498031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.498323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.498338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.498496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.498508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 
00:35:51.837 [2024-12-07 10:10:20.498688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.498701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.837 [2024-12-07 10:10:20.498870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.837 [2024-12-07 10:10:20.498903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.837 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.499155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.499191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.499394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.499428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.499578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.499612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 
00:35:51.838 [2024-12-07 10:10:20.499835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.499869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.500023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.500068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.500309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.500345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.500514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.500561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.500815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.500828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 
00:35:51.838 [2024-12-07 10:10:20.501006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.501037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.501189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.501202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.501365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.501379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.501492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.501506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.501705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.501741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 
00:35:51.838 [2024-12-07 10:10:20.502011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.502046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.502186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.502223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.502369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.502403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.502709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.502744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.502935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.502979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 
00:35:51.838 [2024-12-07 10:10:20.503184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.503219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.503425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.503459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.503748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.503783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.503944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.503987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.504249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.504262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 
00:35:51.838 [2024-12-07 10:10:20.504490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.504525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.838 [2024-12-07 10:10:20.504770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.838 [2024-12-07 10:10:20.504803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.838 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.504995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.505030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.505245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.505282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.505449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.505482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 
00:35:51.839 [2024-12-07 10:10:20.505626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.505640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.505813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.505826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.505920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.505932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.506161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.506234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.506403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.506439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 
00:35:51.839 [2024-12-07 10:10:20.506624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.506660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.506881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.506917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.507222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.507256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.507410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.507446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.507706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.507741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 
00:35:51.839 [2024-12-07 10:10:20.507980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.508017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.508146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.508180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.508346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.508382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.508554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.508587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.508829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.508864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 
00:35:51.839 [2024-12-07 10:10:20.509061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.509097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.509314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.509348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.509558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.509592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.509792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.509835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.510075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.510115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 
00:35:51.839 [2024-12-07 10:10:20.510283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.510333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.510599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.510639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.510908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.510943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.511082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.511096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.511287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.511299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 
00:35:51.839 [2024-12-07 10:10:20.511398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.511409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.839 qpair failed and we were unable to recover it. 00:35:51.839 [2024-12-07 10:10:20.511647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.839 [2024-12-07 10:10:20.511681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.840 qpair failed and we were unable to recover it. 00:35:51.840 [2024-12-07 10:10:20.511917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.840 [2024-12-07 10:10:20.511960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.840 qpair failed and we were unable to recover it. 00:35:51.840 [2024-12-07 10:10:20.512102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.840 [2024-12-07 10:10:20.512135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.840 qpair failed and we were unable to recover it. 00:35:51.840 [2024-12-07 10:10:20.512285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.840 [2024-12-07 10:10:20.512318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.840 qpair failed and we were unable to recover it. 
00:35:51.840 [2024-12-07 10:10:20.512532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.840 [2024-12-07 10:10:20.512566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.840 qpair failed and we were unable to recover it. 00:35:51.840 [2024-12-07 10:10:20.512857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.840 [2024-12-07 10:10:20.512870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.840 qpair failed and we were unable to recover it. 00:35:51.840 [2024-12-07 10:10:20.513042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.840 [2024-12-07 10:10:20.513056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.840 qpair failed and we were unable to recover it. 00:35:51.840 [2024-12-07 10:10:20.513154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.840 [2024-12-07 10:10:20.513166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.840 qpair failed and we were unable to recover it. 00:35:51.840 [2024-12-07 10:10:20.513285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.840 [2024-12-07 10:10:20.513297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.840 qpair failed and we were unable to recover it. 
00:35:51.840 [2024-12-07 10:10:20.513462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.840 [2024-12-07 10:10:20.513496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.840 qpair failed and we were unable to recover it. 00:35:51.840 [2024-12-07 10:10:20.513715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.840 [2024-12-07 10:10:20.513748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.840 qpair failed and we were unable to recover it. 00:35:51.840 [2024-12-07 10:10:20.513964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.840 [2024-12-07 10:10:20.514000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.840 qpair failed and we were unable to recover it. 00:35:51.840 [2024-12-07 10:10:20.514154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.840 [2024-12-07 10:10:20.514189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.840 qpair failed and we were unable to recover it. 00:35:51.840 [2024-12-07 10:10:20.514333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.840 [2024-12-07 10:10:20.514347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.840 qpair failed and we were unable to recover it. 
00:35:51.840 [2024-12-07 10:10:20.514615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.840 [2024-12-07 10:10:20.514628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.840 qpair failed and we were unable to recover it. 00:35:51.840 [2024-12-07 10:10:20.514730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.840 [2024-12-07 10:10:20.514743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.840 qpair failed and we were unable to recover it. 00:35:51.840 [2024-12-07 10:10:20.514913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.840 [2024-12-07 10:10:20.514928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.840 qpair failed and we were unable to recover it. 00:35:51.840 [2024-12-07 10:10:20.515043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.840 [2024-12-07 10:10:20.515055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:51.840 qpair failed and we were unable to recover it. 00:35:52.116 [2024-12-07 10:10:20.515251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.116 [2024-12-07 10:10:20.515266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.116 qpair failed and we were unable to recover it. 
00:35:52.116 [2024-12-07 10:10:20.515491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.116 [2024-12-07 10:10:20.515506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.116 qpair failed and we were unable to recover it. 00:35:52.116 [2024-12-07 10:10:20.515686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.116 [2024-12-07 10:10:20.515699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.116 qpair failed and we were unable to recover it. 00:35:52.116 [2024-12-07 10:10:20.515863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.116 [2024-12-07 10:10:20.515877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.116 qpair failed and we were unable to recover it. 00:35:52.116 [2024-12-07 10:10:20.516118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.116 [2024-12-07 10:10:20.516132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.116 qpair failed and we were unable to recover it. 00:35:52.116 [2024-12-07 10:10:20.516372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.116 [2024-12-07 10:10:20.516386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.116 qpair failed and we were unable to recover it. 
00:35:52.116 [2024-12-07 10:10:20.516583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.116 [2024-12-07 10:10:20.516596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.116 qpair failed and we were unable to recover it. 00:35:52.116 [2024-12-07 10:10:20.516767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.116 [2024-12-07 10:10:20.516782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.116 qpair failed and we were unable to recover it. 00:35:52.116 [2024-12-07 10:10:20.517012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.116 [2024-12-07 10:10:20.517026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.116 qpair failed and we were unable to recover it. 00:35:52.116 [2024-12-07 10:10:20.517214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.116 [2024-12-07 10:10:20.517227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.116 qpair failed and we were unable to recover it. 00:35:52.116 [2024-12-07 10:10:20.517383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.116 [2024-12-07 10:10:20.517396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.116 qpair failed and we were unable to recover it. 
00:35:52.116 [2024-12-07 10:10:20.517658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.116 [2024-12-07 10:10:20.517702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.116 qpair failed and we were unable to recover it. 00:35:52.116 [2024-12-07 10:10:20.517879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.116 [2024-12-07 10:10:20.517913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.116 qpair failed and we were unable to recover it. 00:35:52.116 [2024-12-07 10:10:20.518142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.116 [2024-12-07 10:10:20.518177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.116 qpair failed and we were unable to recover it. 00:35:52.116 [2024-12-07 10:10:20.518400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.116 [2024-12-07 10:10:20.518441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.116 qpair failed and we were unable to recover it. 00:35:52.116 [2024-12-07 10:10:20.518758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.116 [2024-12-07 10:10:20.518792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.116 qpair failed and we were unable to recover it. 
00:35:52.118 [2024-12-07 10:10:20.546378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.118 [2024-12-07 10:10:20.546415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.118 qpair failed and we were unable to recover it. 00:35:52.118 [2024-12-07 10:10:20.546587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.118 [2024-12-07 10:10:20.546623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.118 qpair failed and we were unable to recover it. 00:35:52.118 [2024-12-07 10:10:20.546764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.118 [2024-12-07 10:10:20.546802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.118 qpair failed and we were unable to recover it. 00:35:52.118 [2024-12-07 10:10:20.547085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.118 [2024-12-07 10:10:20.547120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.118 qpair failed and we were unable to recover it. 00:35:52.118 [2024-12-07 10:10:20.547321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.118 [2024-12-07 10:10:20.547356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.118 qpair failed and we were unable to recover it. 
00:35:52.118 [2024-12-07 10:10:20.547537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.118 [2024-12-07 10:10:20.547572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.118 qpair failed and we were unable to recover it. 00:35:52.118 [2024-12-07 10:10:20.547837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.118 [2024-12-07 10:10:20.547871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.118 qpair failed and we were unable to recover it. 00:35:52.118 [2024-12-07 10:10:20.548064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.118 [2024-12-07 10:10:20.548101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.118 qpair failed and we were unable to recover it. 00:35:52.118 [2024-12-07 10:10:20.548340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.118 [2024-12-07 10:10:20.548376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.118 qpair failed and we were unable to recover it. 00:35:52.118 [2024-12-07 10:10:20.548577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.118 [2024-12-07 10:10:20.548612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.118 qpair failed and we were unable to recover it. 
00:35:52.118 [2024-12-07 10:10:20.548775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.118 [2024-12-07 10:10:20.548811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.118 qpair failed and we were unable to recover it. 00:35:52.118 [2024-12-07 10:10:20.549036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.549073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.549357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.549397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.549572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.549620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.549835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.549848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 
00:35:52.119 [2024-12-07 10:10:20.550003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.550041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.550276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.550313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.550579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.550592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.551632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.551662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.551938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.551969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 
00:35:52.119 [2024-12-07 10:10:20.552107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.552122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.552358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.552392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.552559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.552597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.552920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.552970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.553246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.553280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 
00:35:52.119 [2024-12-07 10:10:20.553513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.553549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.553786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.553799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.554021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.554037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.554213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.554246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.554482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.554518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 
00:35:52.119 [2024-12-07 10:10:20.554735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.554779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.554891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.554905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.555098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.555112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.555257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.555272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.555432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.555445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 
00:35:52.119 [2024-12-07 10:10:20.555540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.555552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.555847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.555880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.556074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.556108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.556266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.556299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.556520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.556556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 
00:35:52.119 [2024-12-07 10:10:20.556788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.556822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.557047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.557084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.557321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.557355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.557581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.557617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.557878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.557891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 
00:35:52.119 [2024-12-07 10:10:20.558062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.558121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.558285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.558321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.558607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.558642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.558771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.558785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.558963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.558978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 
00:35:52.119 [2024-12-07 10:10:20.559106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.559120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.559297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.559311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.559418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.559432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.559555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.559568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.559706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.559720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 
00:35:52.119 [2024-12-07 10:10:20.559818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.559830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.560034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.119 [2024-12-07 10:10:20.560048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.119 qpair failed and we were unable to recover it. 00:35:52.119 [2024-12-07 10:10:20.560218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.560232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.560340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.560352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.560453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.560465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 
00:35:52.120 [2024-12-07 10:10:20.560692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.560706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.560811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.560824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.561062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.561076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.561245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.561260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.561372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.561384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 
00:35:52.120 [2024-12-07 10:10:20.561551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.561565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.561643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.561656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.561774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.561787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.561960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.561974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.562151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.562165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 
00:35:52.120 [2024-12-07 10:10:20.562273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.562286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.562390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.562403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.562489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.562502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.562602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.562614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.562728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.562743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 
00:35:52.120 [2024-12-07 10:10:20.562858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.562872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.562990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.563002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.563113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.563129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.563216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.563228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.563458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.563473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 
00:35:52.120 [2024-12-07 10:10:20.563562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.563575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.563693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.563705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.563787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.563799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.563886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.563898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 00:35:52.120 [2024-12-07 10:10:20.564020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.120 [2024-12-07 10:10:20.564033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.120 qpair failed and we were unable to recover it. 
00:35:52.122 [identical error cycle repeated through 2024-12-07 10:10:20.578652: connect() failed, errno = 111; sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it]
00:35:52.122 [2024-12-07 10:10:20.578741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.122 [2024-12-07 10:10:20.578756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.122 qpair failed and we were unable to recover it. 00:35:52.122 [2024-12-07 10:10:20.578862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.122 [2024-12-07 10:10:20.578876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.122 qpair failed and we were unable to recover it. 00:35:52.122 [2024-12-07 10:10:20.579095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.122 [2024-12-07 10:10:20.579109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.122 qpair failed and we were unable to recover it. 00:35:52.122 [2024-12-07 10:10:20.579207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.122 [2024-12-07 10:10:20.579220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.122 qpair failed and we were unable to recover it. 00:35:52.122 [2024-12-07 10:10:20.579316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.122 [2024-12-07 10:10:20.579329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.122 qpair failed and we were unable to recover it. 
00:35:52.122 [2024-12-07 10:10:20.579434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.122 [2024-12-07 10:10:20.579450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.122 qpair failed and we were unable to recover it. 00:35:52.122 [2024-12-07 10:10:20.579566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.122 [2024-12-07 10:10:20.579581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.122 qpair failed and we were unable to recover it. 00:35:52.122 [2024-12-07 10:10:20.579669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.122 [2024-12-07 10:10:20.579683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.122 qpair failed and we were unable to recover it. 00:35:52.122 [2024-12-07 10:10:20.579854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.122 [2024-12-07 10:10:20.579867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.122 qpair failed and we were unable to recover it. 00:35:52.122 [2024-12-07 10:10:20.580024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.122 [2024-12-07 10:10:20.580038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.122 qpair failed and we were unable to recover it. 
00:35:52.122 [2024-12-07 10:10:20.580129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.122 [2024-12-07 10:10:20.580143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.122 qpair failed and we were unable to recover it. 00:35:52.122 [2024-12-07 10:10:20.580246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.122 [2024-12-07 10:10:20.580262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.122 qpair failed and we were unable to recover it. 00:35:52.122 [2024-12-07 10:10:20.580367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.122 [2024-12-07 10:10:20.580379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.122 qpair failed and we were unable to recover it. 00:35:52.122 [2024-12-07 10:10:20.580489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.122 [2024-12-07 10:10:20.580502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.122 qpair failed and we were unable to recover it. 00:35:52.122 [2024-12-07 10:10:20.580604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.122 [2024-12-07 10:10:20.580618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.122 qpair failed and we were unable to recover it. 
00:35:52.122 [2024-12-07 10:10:20.580714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.122 [2024-12-07 10:10:20.580726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.122 qpair failed and we were unable to recover it. 00:35:52.122 [2024-12-07 10:10:20.580807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.122 [2024-12-07 10:10:20.580821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.122 qpair failed and we were unable to recover it. 00:35:52.122 [2024-12-07 10:10:20.580985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.122 [2024-12-07 10:10:20.580998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.122 qpair failed and we were unable to recover it. 00:35:52.122 [2024-12-07 10:10:20.581088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.581100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.581273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.581287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 
00:35:52.123 [2024-12-07 10:10:20.581438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.581452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.581532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.581545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.581670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.581689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.581801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.581816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.581910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.581925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 
00:35:52.123 [2024-12-07 10:10:20.582030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.582060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.582313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.582326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.582477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.582490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.582678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.582691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.582874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.582888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 
00:35:52.123 [2024-12-07 10:10:20.583149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.583162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.583247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.583260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.583504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.583518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.583785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.583798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.584045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.584060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 
00:35:52.123 [2024-12-07 10:10:20.584204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.584216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.584314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.584327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.584499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.584511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.584687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.584701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.584933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.584952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 
00:35:52.123 [2024-12-07 10:10:20.585224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.585239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.585365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.585379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.585602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.585614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.585771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.585785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.585942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.585984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 
00:35:52.123 [2024-12-07 10:10:20.586248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.586265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.586375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.586406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.586586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.586600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.586817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.586830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.587001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.587016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 
00:35:52.123 [2024-12-07 10:10:20.587168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.587192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.587413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.587425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.587601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.587614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.587710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.587724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.587952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.587965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 
00:35:52.123 [2024-12-07 10:10:20.588086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.588099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.588191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.588204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.588302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.588315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.588461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.588474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.588564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.588577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 
00:35:52.123 [2024-12-07 10:10:20.588775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.588790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.588941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.588958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.589062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.589074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.589244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.589257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 00:35:52.123 [2024-12-07 10:10:20.589405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.123 [2024-12-07 10:10:20.589418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.123 qpair failed and we were unable to recover it. 
00:35:52.124 [2024-12-07 10:10:20.589510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.124 [2024-12-07 10:10:20.589524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.124 qpair failed and we were unable to recover it. 00:35:52.124 [2024-12-07 10:10:20.589753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.124 [2024-12-07 10:10:20.589765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.124 qpair failed and we were unable to recover it. 00:35:52.124 [2024-12-07 10:10:20.589985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.124 [2024-12-07 10:10:20.589998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.124 qpair failed and we were unable to recover it. 00:35:52.124 [2024-12-07 10:10:20.590110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.124 [2024-12-07 10:10:20.590136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.124 qpair failed and we were unable to recover it. 00:35:52.124 [2024-12-07 10:10:20.590333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.124 [2024-12-07 10:10:20.590366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.124 qpair failed and we were unable to recover it. 
00:35:52.124 [2024-12-07 10:10:20.590589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.124 [2024-12-07 10:10:20.590623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.124 qpair failed and we were unable to recover it. 00:35:52.124 [2024-12-07 10:10:20.590841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.124 [2024-12-07 10:10:20.590877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.124 qpair failed and we were unable to recover it. 00:35:52.124 [2024-12-07 10:10:20.591023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.124 [2024-12-07 10:10:20.591059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.124 qpair failed and we were unable to recover it. 00:35:52.124 [2024-12-07 10:10:20.591292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.124 [2024-12-07 10:10:20.591328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.124 qpair failed and we were unable to recover it. 00:35:52.124 [2024-12-07 10:10:20.591630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.124 [2024-12-07 10:10:20.591644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.124 qpair failed and we were unable to recover it. 
00:35:52.124 [2024-12-07 10:10:20.591790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.124 [2024-12-07 10:10:20.591811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.124 qpair failed and we were unable to recover it.
00:35:52.124 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats continuously from 10:10:20.592 through 10:10:20.618, almost entirely for tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420, with isolated occurrences for tqpair=0x7efbf8000b90 (10:10:20.609), tqpair=0x7efc04000b90 (10:10:20.610, 10:10:20.615), and tqpair=0x2159010 (10:10:20.610, 10:10:20.616 onward) ...]
00:35:52.126 [2024-12-07 10:10:20.618426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.126 [2024-12-07 10:10:20.618461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.126 qpair failed and we were unable to recover it.
00:35:52.126 [2024-12-07 10:10:20.618614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.618631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.618880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.618916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.619141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.619186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.619315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.619349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.619544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.619580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 
00:35:52.126 [2024-12-07 10:10:20.619704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.619738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.619880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.619914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.620160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.620204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.620435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.620470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.620600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.620637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 
00:35:52.126 [2024-12-07 10:10:20.620851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.620886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.621025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.621042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.621139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.621153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.621276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.621292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.621455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.621472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 
00:35:52.126 [2024-12-07 10:10:20.621665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.621681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.621783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.621800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.622041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.622080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.622223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.622256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.622414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.622450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 
00:35:52.126 [2024-12-07 10:10:20.622598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.622633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.622831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.622864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.623073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.623108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.623275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.623327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.623619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.623656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 
00:35:52.126 [2024-12-07 10:10:20.623870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.623905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.624145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.624183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.624394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.624427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.624668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.624703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.624942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.624988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 
00:35:52.126 [2024-12-07 10:10:20.625246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.625263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.625361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.625373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.625655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.625689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.126 qpair failed and we were unable to recover it. 00:35:52.126 [2024-12-07 10:10:20.625841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.126 [2024-12-07 10:10:20.625875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.626026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.626061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 
00:35:52.127 [2024-12-07 10:10:20.626272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.626305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.626508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.626541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.626740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.626752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.627020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.627034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.627244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.627278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 
00:35:52.127 [2024-12-07 10:10:20.627412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.627446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.627648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.627682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.627900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.627942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.628189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.628222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.628439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.628474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 
00:35:52.127 [2024-12-07 10:10:20.628624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.628637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.628814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.628847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.629066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.629101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.629223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.629255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.629456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.629490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 
00:35:52.127 [2024-12-07 10:10:20.629642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.629655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.629850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.629883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.630016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.630051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.630260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.630294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.630519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.630553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 
00:35:52.127 [2024-12-07 10:10:20.630689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.630702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.630909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.630944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.631148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.631181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.631399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.631434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.631633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.631667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 
00:35:52.127 [2024-12-07 10:10:20.631862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.631896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.632070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.632105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.632331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.632364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.632508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.632542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.632772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.632805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 
00:35:52.127 [2024-12-07 10:10:20.633028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.633064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.633261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.633294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.633499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.633532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.633793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.633827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.633985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.633999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 
00:35:52.127 [2024-12-07 10:10:20.634143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.634156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.634315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.634348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.634627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.634662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.634889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.634922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.635070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.635106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 
00:35:52.127 [2024-12-07 10:10:20.635234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.635267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.635481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.635515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.635723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.635758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.635969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.636003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.636304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.636317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 
00:35:52.127 [2024-12-07 10:10:20.636431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.636444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.636603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.636637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.636895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.636939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.637153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.637166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.637275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.637286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 
00:35:52.127 [2024-12-07 10:10:20.637435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.637448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.637614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.637626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.637805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.637817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.637905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.637918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.638025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.638036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 
00:35:52.127 [2024-12-07 10:10:20.638263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.638299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.638444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.638477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.638626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.638660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.638803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.638816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.638968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.638982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 
00:35:52.127 [2024-12-07 10:10:20.639166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.639179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.639259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.639271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.639440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.639453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.639547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.639561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.639721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.639735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 
00:35:52.127 [2024-12-07 10:10:20.639973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.639987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.640060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.640073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.640252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.640265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.640385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.640418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 00:35:52.127 [2024-12-07 10:10:20.640556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.127 [2024-12-07 10:10:20.640589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.127 qpair failed and we were unable to recover it. 
00:35:52.128 [2024-12-07 10:10:20.640825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.640860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.641030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.641043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.641213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.641248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.641442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.641478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.641682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.641723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 
00:35:52.128 [2024-12-07 10:10:20.641937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.641986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.642139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.642172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.642392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.642427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.642713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.642747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.642976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.643015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 
00:35:52.128 [2024-12-07 10:10:20.643237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.643249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.643421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.643452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.643571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.643603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.643817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.643847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.644048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.644058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 
00:35:52.128 [2024-12-07 10:10:20.644281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.644311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.644530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.644563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.644784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.644832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.644970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.645002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.645192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.645201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 
00:35:52.128 [2024-12-07 10:10:20.645313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.645324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.645464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.645474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.645564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.645574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.645668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.645679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.645787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.645816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 
00:35:52.128 [2024-12-07 10:10:20.646008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.646082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.646234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.646270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.646415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.646428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.646519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.646534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.646702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.646718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 
00:35:52.128 [2024-12-07 10:10:20.646927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.646979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.647209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.647241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.647365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.647396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.647591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.647622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.647763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.647795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 
00:35:52.128 [2024-12-07 10:10:20.647935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.647985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.648132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.648164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.648296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.648326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.648444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.648475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.648738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.648771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 
00:35:52.128 [2024-12-07 10:10:20.648895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.648927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.649148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.649163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.649263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.649277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.649450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.649466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.649558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.649575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 
00:35:52.128 [2024-12-07 10:10:20.649666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.649682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.649855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.649869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.650034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.650050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.650232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.650249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.650479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.650510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 
00:35:52.128 [2024-12-07 10:10:20.650712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.650742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.650939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.650960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.651148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.651178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.651389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.651421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.651615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.651649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 
00:35:52.128 [2024-12-07 10:10:20.651789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.651822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.652036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.652079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.652237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.652254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.652414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.652430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.652612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.652628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 
00:35:52.128 [2024-12-07 10:10:20.652788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.652804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.653023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.653058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.653275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.653308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.653465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.653497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 00:35:52.128 [2024-12-07 10:10:20.653697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.128 [2024-12-07 10:10:20.653730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.128 qpair failed and we were unable to recover it. 
00:35:52.129 [2024-12-07 10:10:20.653878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.653912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.654110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.654128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.654249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.654267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.654427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.654444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.654543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.654559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.654658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.654678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.654793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.654807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.654900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.654914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.655084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.655118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.655260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.655293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.655522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.655558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.655767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.655780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.655865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.655878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.655987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.656001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.656104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.656117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.656283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.656297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.656519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.656551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.656747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.656781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.656967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.657008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.657120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.657133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.657241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.657254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.657365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.657377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.657456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.657469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.657641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.657654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.657799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.657812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.657968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.657981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.658131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.658143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.658236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.658248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.658372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.658406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.658634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.658667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.658786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.658819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.658945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.658990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.659128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.659147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.659249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.659272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.659378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.659394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.659484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.659501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.659603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.659619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.659782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.659799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.659983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.659998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.660101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.660115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.660257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.660270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.660354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.660365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.660517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.660530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.660630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.660644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.660823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.660855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.660998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.661036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.661164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.661197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.661406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.661440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.661576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.661608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.661745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.661779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.661926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.661981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.662209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.662222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.662300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.662312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.662514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.662549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.662770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.662804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.662989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.663022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.663225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.663237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.663409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.663444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.663618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.663661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.663878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.663913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.664171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.664245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.664399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.664436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.664650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.664684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.129 [2024-12-07 10:10:20.664955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.129 [2024-12-07 10:10:20.664973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.129 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.665132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.665148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.665258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.665290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.665492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.665526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.665722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.665755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.665987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.666022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.666212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.666244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.666526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.666560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.666842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.666875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.667086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.667121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.667309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.667350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.667555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.667589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.667802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.667834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.667975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.667992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.668241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.668272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.668476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.668508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.668706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.668740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.668960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.669006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.669154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.669187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.669337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.669372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.669587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.669619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.669926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.669978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.670127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.670161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.670463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.670500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.670780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.670818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.671011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.671029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.671263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.671295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.671509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.671543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.671750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.671786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.672046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.672064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.672171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.672188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.672359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.672394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.672654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.672688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.672898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.672931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.673064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.673098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.673303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.673336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.673544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.673579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.673723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.673756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.673977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.674014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.674273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.674289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.674411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.674443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.674580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.674611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.674796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.674813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.675002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.675037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.675314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.675347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.675481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.675515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.675717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.675751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.675974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.676009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.676140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.676172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.676373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.676406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.676663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.676702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.676902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.130 [2024-12-07 10:10:20.676918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.130 qpair failed and we were unable to recover it.
00:35:52.130 [2024-12-07 10:10:20.677020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.677065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 00:35:52.130 [2024-12-07 10:10:20.677295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.677329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 00:35:52.130 [2024-12-07 10:10:20.677586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.677620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 00:35:52.130 [2024-12-07 10:10:20.677824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.677857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 00:35:52.130 [2024-12-07 10:10:20.677998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.678032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 
00:35:52.130 [2024-12-07 10:10:20.678232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.678264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 00:35:52.130 [2024-12-07 10:10:20.678491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.678523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 00:35:52.130 [2024-12-07 10:10:20.678715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.678748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 00:35:52.130 [2024-12-07 10:10:20.678973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.679005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 00:35:52.130 [2024-12-07 10:10:20.679198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.679238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 
00:35:52.130 [2024-12-07 10:10:20.679438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.679455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 00:35:52.130 [2024-12-07 10:10:20.679562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.679579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 00:35:52.130 [2024-12-07 10:10:20.679659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.679674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 00:35:52.130 [2024-12-07 10:10:20.679861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.679893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 00:35:52.130 [2024-12-07 10:10:20.680061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.680095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 
00:35:52.130 [2024-12-07 10:10:20.680287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.680320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 00:35:52.130 [2024-12-07 10:10:20.680460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.680491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 00:35:52.130 [2024-12-07 10:10:20.680631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.680663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 00:35:52.130 [2024-12-07 10:10:20.680889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.130 [2024-12-07 10:10:20.680922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.130 qpair failed and we were unable to recover it. 00:35:52.130 [2024-12-07 10:10:20.681226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.681259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 
00:35:52.131 [2024-12-07 10:10:20.681471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.681504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.681717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.681750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.681917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.681933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.682180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.682196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.682376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.682394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 
00:35:52.131 [2024-12-07 10:10:20.682553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.682569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.682741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.682758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.682932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.682959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.683053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.683067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.683284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.683301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 
00:35:52.131 [2024-12-07 10:10:20.683396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.683411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.683577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.683610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.683746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.683777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.683916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.683963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.684222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.684256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 
00:35:52.131 [2024-12-07 10:10:20.684383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.684415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.684608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.684642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.684804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.684837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.684988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.685028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.685230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.685263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 
00:35:52.131 [2024-12-07 10:10:20.685434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.685450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.685607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.685623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.685735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.685750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.685992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.686009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.686158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.686174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 
00:35:52.131 [2024-12-07 10:10:20.686285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.686301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.686387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.686401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.686569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.686601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.686733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.686766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.686999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.687032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 
00:35:52.131 [2024-12-07 10:10:20.687288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.687321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.687497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.687571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.687736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.687774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.687937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.687986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.688178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.688194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 
00:35:52.131 [2024-12-07 10:10:20.688414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.688447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.688710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.688754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.689066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.689090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.689264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.689285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.689459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.689480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 
00:35:52.131 [2024-12-07 10:10:20.689601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.689619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.689789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.689807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.689990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.690023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.690227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.690260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.690460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.690493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 
00:35:52.131 [2024-12-07 10:10:20.690688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.690759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.690966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.691016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.691125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.691141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.691228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.691243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.691418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.691434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 
00:35:52.131 [2024-12-07 10:10:20.691603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.691638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.691839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.691871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.692081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.692098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.692260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.692278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.692489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.692505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 
00:35:52.131 [2024-12-07 10:10:20.692600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.692614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.692793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.692810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.692980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.692998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.693167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.693206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.131 [2024-12-07 10:10:20.693425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.693459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 
00:35:52.131 [2024-12-07 10:10:20.693593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.131 [2024-12-07 10:10:20.693626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.131 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.693808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.693825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.693987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.694023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.694214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.694248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.694377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.694409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 
00:35:52.132 [2024-12-07 10:10:20.694607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.694642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.694897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.694930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.695149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.695196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.695314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.695348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.695498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.695532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 
00:35:52.132 [2024-12-07 10:10:20.695727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.695760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.695965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.696000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.696125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.696141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.696374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.696390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.696564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.696579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 
00:35:52.132 [2024-12-07 10:10:20.696690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.696706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.696794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.696809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.696970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.696988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.697147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.697164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.697248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.697262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 
00:35:52.132 [2024-12-07 10:10:20.697378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.697396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.697587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.697629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.697763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.697796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.697932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.697975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.698114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.698147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 
00:35:52.132 [2024-12-07 10:10:20.698276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.698305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.698484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.698523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.698664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.698701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.698969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.699017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.699161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.699172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 
00:35:52.132 [2024-12-07 10:10:20.699255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.699266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.699425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.699437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.699687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.699721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.699862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.699882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.699980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.699996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 
00:35:52.132 [2024-12-07 10:10:20.700106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.700123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.700222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.700237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.700352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.700368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.700517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.700536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.700640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.700654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 
00:35:52.132 [2024-12-07 10:10:20.700820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.700836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.701050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.701097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.701294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.132 [2024-12-07 10:10:20.701328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.132 qpair failed and we were unable to recover it. 00:35:52.132 [2024-12-07 10:10:20.701436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.701467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.701728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.701761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 
00:35:52.133 [2024-12-07 10:10:20.701965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.702000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.702226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.702259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.702538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.702573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.702763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.702796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.702995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.703030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 
00:35:52.133 [2024-12-07 10:10:20.703261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.703278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.703386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.703420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.703629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.703663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.703860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.703900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.704065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.704081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 
00:35:52.133 [2024-12-07 10:10:20.704235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.704268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.704390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.704423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.704702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.704736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.704975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.704992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.705076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.705091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 
00:35:52.133 [2024-12-07 10:10:20.705245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.705261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.705377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.705393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.705664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.705697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.705902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.705935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.706169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.706203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 
00:35:52.133 [2024-12-07 10:10:20.706461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.706477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.706578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.706594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.706769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.706804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.707002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.707036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.707234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.707269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 
00:35:52.133 [2024-12-07 10:10:20.707498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.707531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.707684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.707717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.707911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.707957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.708158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.708192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.708451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.708484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 
00:35:52.133 [2024-12-07 10:10:20.708691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.708725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.708931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.708974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.709260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.709293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.709498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.709538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.709694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.709728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 
00:35:52.133 [2024-12-07 10:10:20.709960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.709976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.710064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.710079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.710344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.710377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.710572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.710605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.710747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.710779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 
00:35:52.133 [2024-12-07 10:10:20.710923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.133 [2024-12-07 10:10:20.710967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.133 qpair failed and we were unable to recover it. 00:35:52.133 [2024-12-07 10:10:20.711161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.134 [2024-12-07 10:10:20.711195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.134 qpair failed and we were unable to recover it. 00:35:52.134 [2024-12-07 10:10:20.711388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.134 [2024-12-07 10:10:20.711421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.134 qpair failed and we were unable to recover it. 00:35:52.134 [2024-12-07 10:10:20.711557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.134 [2024-12-07 10:10:20.711573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.134 qpair failed and we were unable to recover it. 00:35:52.134 [2024-12-07 10:10:20.711679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.134 [2024-12-07 10:10:20.711695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.134 qpair failed and we were unable to recover it. 
00:35:52.134 [2024-12-07 10:10:20.711805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.134 [2024-12-07 10:10:20.711821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.134 qpair failed and we were unable to recover it.
[the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7efbf8000b90, addr=10.0.0.2, port=4420 repeats continuously from 10:10:20.711 through 10:10:20.736]
00:35:52.136 [2024-12-07 10:10:20.736523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.136 [2024-12-07 10:10:20.736539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.136 qpair failed and we were unable to recover it. 00:35:52.136 [2024-12-07 10:10:20.736710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.136 [2024-12-07 10:10:20.736726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.136 qpair failed and we were unable to recover it. 00:35:52.136 [2024-12-07 10:10:20.736907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.136 [2024-12-07 10:10:20.736939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.136 qpair failed and we were unable to recover it. 00:35:52.136 [2024-12-07 10:10:20.737139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.136 [2024-12-07 10:10:20.737173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.136 qpair failed and we were unable to recover it. 00:35:52.136 [2024-12-07 10:10:20.737292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.136 [2024-12-07 10:10:20.737325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.136 qpair failed and we were unable to recover it. 
00:35:52.136 [2024-12-07 10:10:20.737529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.136 [2024-12-07 10:10:20.737562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.136 qpair failed and we were unable to recover it. 00:35:52.136 [2024-12-07 10:10:20.737771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.136 [2024-12-07 10:10:20.737805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.136 qpair failed and we were unable to recover it. 00:35:52.136 [2024-12-07 10:10:20.737996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.136 [2024-12-07 10:10:20.738036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.136 qpair failed and we were unable to recover it. 00:35:52.136 [2024-12-07 10:10:20.738258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.136 [2024-12-07 10:10:20.738275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.136 qpair failed and we were unable to recover it. 00:35:52.136 [2024-12-07 10:10:20.738455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.136 [2024-12-07 10:10:20.738471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.136 qpair failed and we were unable to recover it. 
00:35:52.136 [2024-12-07 10:10:20.738567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.136 [2024-12-07 10:10:20.738598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.136 qpair failed and we were unable to recover it. 00:35:52.136 [2024-12-07 10:10:20.738786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.136 [2024-12-07 10:10:20.738820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.136 qpair failed and we were unable to recover it. 00:35:52.136 [2024-12-07 10:10:20.739031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.136 [2024-12-07 10:10:20.739064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.136 qpair failed and we were unable to recover it. 00:35:52.136 [2024-12-07 10:10:20.739187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.136 [2024-12-07 10:10:20.739203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.136 qpair failed and we were unable to recover it. 00:35:52.136 [2024-12-07 10:10:20.739410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.136 [2024-12-07 10:10:20.739426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.136 qpair failed and we were unable to recover it. 
00:35:52.136 [2024-12-07 10:10:20.739600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.136 [2024-12-07 10:10:20.739633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.136 qpair failed and we were unable to recover it. 00:35:52.136 [2024-12-07 10:10:20.739850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.136 [2024-12-07 10:10:20.739885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.136 qpair failed and we were unable to recover it. 00:35:52.136 [2024-12-07 10:10:20.740074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.136 [2024-12-07 10:10:20.740108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.136 qpair failed and we were unable to recover it. 00:35:52.136 [2024-12-07 10:10:20.740298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.740332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.740529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.740561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 
00:35:52.137 [2024-12-07 10:10:20.740787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.740827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.741026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.741043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.741244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.741278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.741549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.741581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.741798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.741831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 
00:35:52.137 [2024-12-07 10:10:20.741979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.742014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.742210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.742244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.742447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.742464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.742640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.742657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.742815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.742831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 
00:35:52.137 [2024-12-07 10:10:20.742922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.742937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.743111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.743128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.743227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.743242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.743399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.743415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.743534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.743568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 
00:35:52.137 [2024-12-07 10:10:20.743700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.743733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.743874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.743909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.744195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.744212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.744404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.744419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.744512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.744527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 
00:35:52.137 [2024-12-07 10:10:20.744648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.744689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.744885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.744919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.745127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.745160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.745380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.745396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.745508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.745541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 
00:35:52.137 [2024-12-07 10:10:20.745700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.745733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.745991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.746025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.746195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.746212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.746300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.746316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.746423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.746438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 
00:35:52.137 [2024-12-07 10:10:20.746662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.746694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.746883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.746917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.747167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.747209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.747367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.747384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.747566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.747599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 
00:35:52.137 [2024-12-07 10:10:20.747727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.747761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.747974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.748007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.748202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.748218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.748440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.748456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.748540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.748555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 
00:35:52.137 [2024-12-07 10:10:20.748630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.748648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.748866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.748882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.749001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.749016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.749196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.749228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 00:35:52.137 [2024-12-07 10:10:20.749362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.137 [2024-12-07 10:10:20.749393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.137 qpair failed and we were unable to recover it. 
00:35:52.138 [2024-12-07 10:10:20.749551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.138 [2024-12-07 10:10:20.749584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.138 qpair failed and we were unable to recover it. 00:35:52.138 [2024-12-07 10:10:20.749779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.138 [2024-12-07 10:10:20.749813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.138 qpair failed and we were unable to recover it. 00:35:52.138 [2024-12-07 10:10:20.750013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.138 [2024-12-07 10:10:20.750048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.138 qpair failed and we were unable to recover it. 00:35:52.138 [2024-12-07 10:10:20.750228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.138 [2024-12-07 10:10:20.750246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.138 qpair failed and we were unable to recover it. 00:35:52.138 [2024-12-07 10:10:20.750431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.138 [2024-12-07 10:10:20.750447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.138 qpair failed and we were unable to recover it. 
00:35:52.138 [2024-12-07 10:10:20.750575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.138 [2024-12-07 10:10:20.750608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.138 qpair failed and we were unable to recover it. 00:35:52.138 [2024-12-07 10:10:20.750807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.138 [2024-12-07 10:10:20.750839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.138 qpair failed and we were unable to recover it. 00:35:52.138 [2024-12-07 10:10:20.750980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.138 [2024-12-07 10:10:20.751014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.138 qpair failed and we were unable to recover it. 00:35:52.138 [2024-12-07 10:10:20.751198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.138 [2024-12-07 10:10:20.751215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.138 qpair failed and we were unable to recover it. 00:35:52.138 [2024-12-07 10:10:20.751394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.138 [2024-12-07 10:10:20.751427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.138 qpair failed and we were unable to recover it. 
00:35:52.138 [2024-12-07 10:10:20.751562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.138 [2024-12-07 10:10:20.751594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.138 qpair failed and we were unable to recover it.
[the same error triple — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 10:10:20.751852 through 10:10:20.774283]
00:35:52.140 [2024-12-07 10:10:20.774400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.774434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 00:35:52.140 [2024-12-07 10:10:20.774701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.774734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 00:35:52.140 [2024-12-07 10:10:20.774875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.774907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 00:35:52.140 [2024-12-07 10:10:20.775056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.775092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 00:35:52.140 [2024-12-07 10:10:20.775297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.775331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 
00:35:52.140 [2024-12-07 10:10:20.775467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.775500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 00:35:52.140 [2024-12-07 10:10:20.775692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.775726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 00:35:52.140 [2024-12-07 10:10:20.775917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.775961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 00:35:52.140 [2024-12-07 10:10:20.776173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.776206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 00:35:52.140 [2024-12-07 10:10:20.776329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.776360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 
00:35:52.140 [2024-12-07 10:10:20.776579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.776612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 00:35:52.140 [2024-12-07 10:10:20.776842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.776874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 00:35:52.140 [2024-12-07 10:10:20.777139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.777173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 00:35:52.140 [2024-12-07 10:10:20.777318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.777335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 00:35:52.140 [2024-12-07 10:10:20.777513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.777545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 
00:35:52.140 [2024-12-07 10:10:20.777684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.777718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 00:35:52.140 [2024-12-07 10:10:20.777908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.777941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 00:35:52.140 [2024-12-07 10:10:20.778144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.778181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 00:35:52.140 [2024-12-07 10:10:20.778383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.778421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 00:35:52.140 [2024-12-07 10:10:20.778704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.778738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 
00:35:52.140 [2024-12-07 10:10:20.779010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.779044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.140 qpair failed and we were unable to recover it. 00:35:52.140 [2024-12-07 10:10:20.779251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-07 10:10:20.779283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.779488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.779522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.779766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.779799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.780057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.780090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 
00:35:52.141 [2024-12-07 10:10:20.780284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.780319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.780511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.780528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.780684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.780700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.780866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.780907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.781174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.781209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 
00:35:52.141 [2024-12-07 10:10:20.781358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.781401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.781514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.781530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.781643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.781658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.781880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.781897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.782001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.782017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 
00:35:52.141 [2024-12-07 10:10:20.782208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.782253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.782479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.782512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.782702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.782736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.782891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.782924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.783078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.783112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 
00:35:52.141 [2024-12-07 10:10:20.783344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.783376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.783559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.783593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.783749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.783782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.783994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.784028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.784249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.784283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 
00:35:52.141 [2024-12-07 10:10:20.784472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.784489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.784595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.784612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.784836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.784852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.785004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.785020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.785190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.785221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 
00:35:52.141 [2024-12-07 10:10:20.785379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.785413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.785608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.785642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.785770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.785802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.786044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.786060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.786133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.786148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 
00:35:52.141 [2024-12-07 10:10:20.786265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.786299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.786479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.786511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.786781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.786851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.787049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.787063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.787156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.787168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 
00:35:52.141 [2024-12-07 10:10:20.787405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.787418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.787593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.787605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.787698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.787709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.787843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.787875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.788005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.788037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 
00:35:52.141 [2024-12-07 10:10:20.788195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.788208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.788386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.788419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.788632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.788666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.788926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.788973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.789113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.789126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 
00:35:52.141 [2024-12-07 10:10:20.789299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.789342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.789613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.789645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.789775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.789808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.790077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.790110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 00:35:52.141 [2024-12-07 10:10:20.790243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-07 10:10:20.790255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.141 qpair failed and we were unable to recover it. 
00:35:52.141 [2024-12-07 10:10:20.790513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.790525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.790678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.790691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.790815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.790849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.791106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.791139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.791269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.791303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.791461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.791473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.791583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.791594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.791702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.791713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.791944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.791991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.792258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.792292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.792405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.792437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.792561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.792594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.792828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.792865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.793017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.793035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.793203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.793218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.793393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.793423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.793618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.793649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.793839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.793872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.794061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.794095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.794303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.794336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.794541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.794573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.794766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.794798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.795082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.795157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.795387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.795424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.795637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.795671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.795873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.795909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.796204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.796222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.796317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.796332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.796495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.796532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.796742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.796773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.797014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.797048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.797235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.797251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.797359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.797391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.797528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.797562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.797842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.797875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.798014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.798035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.798129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.798143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.798312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.798329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.799675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.799731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.800058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.800076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.800243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.800261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.800383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.800416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.800567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.800600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.800864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.800900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.801108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.801124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.801231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.801248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.142 [2024-12-07 10:10:20.801414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.142 [2024-12-07 10:10:20.801445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.142 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.801583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.801615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.801819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.801850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.802064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.802098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.802243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.802260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.802439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.802454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.802615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.802631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.802726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.802741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.802834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.802850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.803073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.803089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.803199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.803215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.803436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.803451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.803665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.803682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.803837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.803852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.804034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.804051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.804151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.804167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.804367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.804399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.804544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.804576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.804696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.804727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.804843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.804873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.805073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.805106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.805226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.805258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.805536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.805566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.805766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.805799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.805979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.806011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.806204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.806237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.806381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.806414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.806615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.806649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.806846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.806880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.807078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.807098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.807255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.807271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.807445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.807460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.807579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.807595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.807699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.807715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.807809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.807843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.807982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.808016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.808227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.808260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.808451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.808484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.808732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.808763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.808919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.808961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.809082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.809115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.809370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.809402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.809970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.810015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.810251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.810286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.810540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.810574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.810796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.810829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.811044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.811079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.811330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.811346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.811501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.811517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.811635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.811669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.811804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.811838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.812057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.812091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.812288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.812321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.812530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.812563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.812758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.812791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.812945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.812991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.813162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.813200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.143 [2024-12-07 10:10:20.813399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.143 [2024-12-07 10:10:20.813474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.143 qpair failed and we were unable to recover it.
00:35:52.144 [2024-12-07 10:10:20.813706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.144 [2024-12-07 10:10:20.813743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.144 qpair failed and we were unable to recover it.
00:35:52.144 [2024-12-07 10:10:20.813880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.144 [2024-12-07 10:10:20.813913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.144 qpair failed and we were unable to recover it.
00:35:52.144 [2024-12-07 10:10:20.814145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.144 [2024-12-07 10:10:20.814180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.144 qpair failed and we were unable to recover it.
00:35:52.144 [2024-12-07 10:10:20.814376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.144 [2024-12-07 10:10:20.814409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.144 qpair failed and we were unable to recover it.
00:35:52.144 [2024-12-07 10:10:20.814553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.144 [2024-12-07 10:10:20.814586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.144 qpair failed and we were unable to recover it.
00:35:52.144 [2024-12-07 10:10:20.814729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.144 [2024-12-07 10:10:20.814761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.144 qpair failed and we were unable to recover it.
00:35:52.144 [2024-12-07 10:10:20.815018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.144 [2024-12-07 10:10:20.815053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.144 qpair failed and we were unable to recover it.
00:35:52.144 [2024-12-07 10:10:20.815244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.144 [2024-12-07 10:10:20.815278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.144 qpair failed and we were unable to recover it.
00:35:52.144 [2024-12-07 10:10:20.815490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.144 [2024-12-07 10:10:20.815522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.144 qpair failed and we were unable to recover it.
00:35:52.144 [2024-12-07 10:10:20.815732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.144 [2024-12-07 10:10:20.815764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.144 qpair failed and we were unable to recover it.
00:35:52.144 [2024-12-07 10:10:20.815894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.144 [2024-12-07 10:10:20.815927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.144 qpair failed and we were unable to recover it.
00:35:52.144 [2024-12-07 10:10:20.816105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.816138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.816300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.816334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.816486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.816519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.816709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.816742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.816885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.816918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 
00:35:52.144 [2024-12-07 10:10:20.817082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.817118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.817333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.817370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.817608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.817627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.817786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.817803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.817900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.817916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 
00:35:52.144 [2024-12-07 10:10:20.818020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.818043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.818208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.818222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.818367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.818379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.818483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.818495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.818655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.818667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 
00:35:52.144 [2024-12-07 10:10:20.818883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.818895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.818979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.818991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.819168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.819200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.819324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.819357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.819629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.819662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 
00:35:52.144 [2024-12-07 10:10:20.819863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.819895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.820013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.820052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.820259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.820292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.820438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.820471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.820688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.820700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 
00:35:52.144 [2024-12-07 10:10:20.820793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.820804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.820946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.820964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.821026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.821038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.821117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.821128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.821210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.821221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 
00:35:52.144 [2024-12-07 10:10:20.821311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.821322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.821556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.821568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.821656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.821668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.821757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.821767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.144 [2024-12-07 10:10:20.821851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.821863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 
00:35:52.144 [2024-12-07 10:10:20.821957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.144 [2024-12-07 10:10:20.821968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.144 qpair failed and we were unable to recover it. 00:35:52.416 [2024-12-07 10:10:20.822055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.416 [2024-12-07 10:10:20.822066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.416 qpair failed and we were unable to recover it. 00:35:52.416 [2024-12-07 10:10:20.822217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.416 [2024-12-07 10:10:20.822229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.822325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.822337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.822439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.822451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 
00:35:52.417 [2024-12-07 10:10:20.822615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.822627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.822706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.822719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.822798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.822809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.822903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.822914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.823009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.823021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 
00:35:52.417 [2024-12-07 10:10:20.823181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.823193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.823274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.823285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.823358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.823369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.823517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.823529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.823638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.823650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 
00:35:52.417 [2024-12-07 10:10:20.823744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.823754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.823911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.823923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.824068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.824080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.824220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.824232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.824326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.824337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 
00:35:52.417 [2024-12-07 10:10:20.824412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.824423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.824513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.824524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.824668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.824680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.824762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.824773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.824924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.824936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 
00:35:52.417 [2024-12-07 10:10:20.825033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.825044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.825208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.825219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.825308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.825319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.825383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.825394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.825475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.825486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 
00:35:52.417 [2024-12-07 10:10:20.825712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.825724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.825804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.825814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.825978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.825994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.826148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.826161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.826243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.826254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 
00:35:52.417 [2024-12-07 10:10:20.826398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.826410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.826506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.826519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.826610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.826623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.826719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.826732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.826889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.826921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 
00:35:52.417 [2024-12-07 10:10:20.827139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.827172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.827395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.827428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.827598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.827610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.827756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.417 [2024-12-07 10:10:20.827788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.417 qpair failed and we were unable to recover it. 00:35:52.417 [2024-12-07 10:10:20.827909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.418 [2024-12-07 10:10:20.827942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.418 qpair failed and we were unable to recover it. 
00:35:52.418 [2024-12-07 10:10:20.828069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.418 [2024-12-07 10:10:20.828101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.418 qpair failed and we were unable to recover it. 00:35:52.418 [2024-12-07 10:10:20.828240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.418 [2024-12-07 10:10:20.828272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.418 qpair failed and we were unable to recover it. 00:35:52.418 [2024-12-07 10:10:20.828455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.418 [2024-12-07 10:10:20.828468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.418 qpair failed and we were unable to recover it. 00:35:52.418 [2024-12-07 10:10:20.828549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.418 [2024-12-07 10:10:20.828562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.418 qpair failed and we were unable to recover it. 00:35:52.418 [2024-12-07 10:10:20.828809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.418 [2024-12-07 10:10:20.828821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.418 qpair failed and we were unable to recover it. 
00:35:52.418 [2024-12-07 10:10:20.828898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.828909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.828985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.828997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.829110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.829143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.829265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.829298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.829438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.829470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.829593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.829626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.829763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.829795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.829963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.830002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.830163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.830176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.830334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.830368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.830562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.830594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.830724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.830756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.830881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.830913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.831124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.831158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.831287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.831321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.831520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.831533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.831621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.831646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.831874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.831933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.832150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.832186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.832362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.832378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.832567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.832601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.832847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.832883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.833074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.833119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.833316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.833351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.833629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.833645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.833815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.833832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.833966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.834001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.834180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.834213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.834419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.834465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.834648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.834664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.834774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.834805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.834927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.834970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.835245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.835279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.835398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.835414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.835576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.835614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.835737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.835769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.835984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.836020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.418 qpair failed and we were unable to recover it.
00:35:52.418 [2024-12-07 10:10:20.836230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.418 [2024-12-07 10:10:20.836263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.836384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.836418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.836544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.836576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.836696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.836730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.836871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.836887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.836985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.837002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.837174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.837190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.837351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.837367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.837530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.837546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.837645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.837660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.837763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.837778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.837881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.837897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.838020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.838055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.838252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.838285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.838479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.838512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.838704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.838737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.839020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.839054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.839263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.839295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.839511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.839544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.839757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.839791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.839928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.839969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.840107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.840123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.840219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.840233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.840438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.840469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.840685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.840717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.840838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.840877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.841138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.841173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.841419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.841453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.841583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.841598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.841796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.841829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.841966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.842001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.842222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.842255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.842390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.842406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.842572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.842606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.842797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.842831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.843023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.843057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.843256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.843290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.843493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.843509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.843735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.843769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.843926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.843969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.844117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.844152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.844339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.844373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.844640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.844674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.419 [2024-12-07 10:10:20.844861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.419 [2024-12-07 10:10:20.844895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.419 qpair failed and we were unable to recover it.
00:35:52.420 [2024-12-07 10:10:20.845053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.420 [2024-12-07 10:10:20.845088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.420 qpair failed and we were unable to recover it.
00:35:52.420 [2024-12-07 10:10:20.845321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.420 [2024-12-07 10:10:20.845355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.420 qpair failed and we were unable to recover it.
00:35:52.420 [2024-12-07 10:10:20.845494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.420 [2024-12-07 10:10:20.845527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.420 qpair failed and we were unable to recover it.
00:35:52.420 [2024-12-07 10:10:20.845760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.420 [2024-12-07 10:10:20.845793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.420 qpair failed and we were unable to recover it.
00:35:52.420 [2024-12-07 10:10:20.845980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.420 [2024-12-07 10:10:20.846016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.420 qpair failed and we were unable to recover it.
00:35:52.420 [2024-12-07 10:10:20.846151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.420 [2024-12-07 10:10:20.846188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.420 qpair failed and we were unable to recover it.
00:35:52.420 [2024-12-07 10:10:20.846328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.420 [2024-12-07 10:10:20.846361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.420 qpair failed and we were unable to recover it.
00:35:52.420 [2024-12-07 10:10:20.846547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.420 [2024-12-07 10:10:20.846581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.420 qpair failed and we were unable to recover it.
00:35:52.420 [2024-12-07 10:10:20.846760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.420 [2024-12-07 10:10:20.846800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.420 qpair failed and we were unable to recover it.
00:35:52.420 [2024-12-07 10:10:20.846987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.420 [2024-12-07 10:10:20.847004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.420 qpair failed and we were unable to recover it.
00:35:52.420 [2024-12-07 10:10:20.847092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.420 [2024-12-07 10:10:20.847102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.420 qpair failed and we were unable to recover it.
00:35:52.420 [2024-12-07 10:10:20.847194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.420 [2024-12-07 10:10:20.847204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.420 qpair failed and we were unable to recover it.
00:35:52.420 [2024-12-07 10:10:20.847281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.420 [2024-12-07 10:10:20.847291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.420 qpair failed and we were unable to recover it.
00:35:52.420 [2024-12-07 10:10:20.847371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.420 [2024-12-07 10:10:20.847382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.420 qpair failed and we were unable to recover it.
00:35:52.420 [2024-12-07 10:10:20.847547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.847559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.847720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.847733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.847898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.847928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.848099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.848132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.848276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.848310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 
00:35:52.420 [2024-12-07 10:10:20.848457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.848490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.848619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.848651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.848816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.848858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.848997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.849029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.849225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.849258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 
00:35:52.420 [2024-12-07 10:10:20.849533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.849565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.849718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.849751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.849894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.849926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.850138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.850173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.850345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.850358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 
00:35:52.420 [2024-12-07 10:10:20.850516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.850549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.850741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.850773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.850973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.851007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.851210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.851243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.851428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.851460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 
00:35:52.420 [2024-12-07 10:10:20.851596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.851629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.851772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.851806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.852009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.852043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.852257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.852290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 00:35:52.420 [2024-12-07 10:10:20.852490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.420 [2024-12-07 10:10:20.852522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.420 qpair failed and we were unable to recover it. 
00:35:52.421 [2024-12-07 10:10:20.852657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.852688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.852831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.852863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.853053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.853086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.853366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.853400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.853604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.853638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 
00:35:52.421 [2024-12-07 10:10:20.853762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.853793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.853897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.853929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.854145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.854179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.854289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.854300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.854390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.854401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 
00:35:52.421 [2024-12-07 10:10:20.854510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.854543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.854741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.854772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.854981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.855015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.855221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.855254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.855447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.855479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 
00:35:52.421 [2024-12-07 10:10:20.855670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.855682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.855858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.855892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.856042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.856075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.856260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.856293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.856480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.856493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 
00:35:52.421 [2024-12-07 10:10:20.856645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.856678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.856806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.856838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.857107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.857145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.857355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.857367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.857540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.857572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 
00:35:52.421 [2024-12-07 10:10:20.857705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.857739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.858043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.858079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.858303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.858315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.858499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.858510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.858719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.858752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 
00:35:52.421 [2024-12-07 10:10:20.858901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.858932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.859138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.859171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.859362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.859373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.859441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.859452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.859542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.859553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 
00:35:52.421 [2024-12-07 10:10:20.859709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.859720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.859809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.859820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.860121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.860155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.860345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.860378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.860520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.860553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 
00:35:52.421 [2024-12-07 10:10:20.860755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.421 [2024-12-07 10:10:20.860787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.421 qpair failed and we were unable to recover it. 00:35:52.421 [2024-12-07 10:10:20.861013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.861047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.861273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.861308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.861570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.861607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.861779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.861816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 
00:35:52.422 [2024-12-07 10:10:20.862044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.862078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.862226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.862259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.862524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.862556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.862697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.862730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.862930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.863016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 
00:35:52.422 [2024-12-07 10:10:20.863230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.863266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.863491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.863507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.863683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.863716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.863989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.864024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.864163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.864195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 
00:35:52.422 [2024-12-07 10:10:20.864325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.864340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.864521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.864554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.864688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.864721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.864855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.864887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.865044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.865078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 
00:35:52.422 [2024-12-07 10:10:20.865194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.865227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.865446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.865479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.865720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.865753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.865965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.865999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.866207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.866239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 
00:35:52.422 [2024-12-07 10:10:20.866508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.866524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.866642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.866658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.866782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.866817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.867016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.867052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.867260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.867276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 
00:35:52.422 [2024-12-07 10:10:20.867381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.867413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.867530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.867565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.867752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.867784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.867986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.868020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.868278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.868311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 
00:35:52.422 [2024-12-07 10:10:20.868500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.868518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.868627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.868646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.868813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.868829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.869005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.869038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.869253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.869292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 
00:35:52.422 [2024-12-07 10:10:20.869444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.869477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.422 qpair failed and we were unable to recover it. 00:35:52.422 [2024-12-07 10:10:20.869618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.422 [2024-12-07 10:10:20.869651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.869840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.869861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.869974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.870000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.870249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.870268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 
00:35:52.423 [2024-12-07 10:10:20.870372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.870386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.870488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.870504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.870618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.870637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.870838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.870868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.870971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.870984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 
00:35:52.423 [2024-12-07 10:10:20.871128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.871140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.871305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.871338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.871600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.871633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.871892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.871925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.872131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.872166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 
00:35:52.423 [2024-12-07 10:10:20.872327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.872339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.872511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.872544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.872746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.872777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.872980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.873014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.873155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.873189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 
00:35:52.423 [2024-12-07 10:10:20.873412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.873446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.873639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.873673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.873961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.873996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.874242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.874278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.874420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.874437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 
00:35:52.423 [2024-12-07 10:10:20.874539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.874553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.874653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.874667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.874755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.874771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.874886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.874902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.874990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.875003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 
00:35:52.423 [2024-12-07 10:10:20.875067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.875078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.875243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.875277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.875510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.875543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.875667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.875699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.875837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.875869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 
00:35:52.423 [2024-12-07 10:10:20.876070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.876104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.876235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.876277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.876453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.876465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.876628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.876660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.876863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.876896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 
00:35:52.423 [2024-12-07 10:10:20.877189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.877222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.877434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.877470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.877636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.877653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.877877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.877893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 00:35:52.423 [2024-12-07 10:10:20.878056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.423 [2024-12-07 10:10:20.878073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.423 qpair failed and we were unable to recover it. 
00:35:52.423 [2024-12-07 10:10:20.878236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.878251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.878420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.878451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.878659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.878690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.878810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.878842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.879074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.879108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 
00:35:52.424 [2024-12-07 10:10:20.879253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.879287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.879511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.879545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.879693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.879725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.879918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.879960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.880179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.880212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 
00:35:52.424 [2024-12-07 10:10:20.880400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.880433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.880629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.880646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.880802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.880834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.881021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.881054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.881184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.881216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 
00:35:52.424 [2024-12-07 10:10:20.881400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.881434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.881633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.881665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.881865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.881897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.882113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.882146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.882288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.882323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 
00:35:52.424 [2024-12-07 10:10:20.882519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.882552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.882804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.882836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.882969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.883002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.883196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.883229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.883345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.883361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 
00:35:52.424 [2024-12-07 10:10:20.883466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.883483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.883617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.883632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.883791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.883824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.884028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.884061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.884254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.884286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 
00:35:52.424 [2024-12-07 10:10:20.884409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.884421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.884609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.884643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.884915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.884961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.885151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.885183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.885376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.885393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 
00:35:52.424 [2024-12-07 10:10:20.885629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.885662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.885866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.885899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.886119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.886154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.886298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.886331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 00:35:52.424 [2024-12-07 10:10:20.886606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.424 [2024-12-07 10:10:20.886622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.424 qpair failed and we were unable to recover it. 
00:35:52.424 [2024-12-07 10:10:20.886709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.424 [2024-12-07 10:10:20.886723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.424 qpair failed and we were unable to recover it.
00:35:52.424 [2024-12-07 10:10:20.886962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.424 [2024-12-07 10:10:20.886980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.424 qpair failed and we were unable to recover it.
00:35:52.424 [2024-12-07 10:10:20.887212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.424 [2024-12-07 10:10:20.887245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.424 qpair failed and we were unable to recover it.
00:35:52.424 [2024-12-07 10:10:20.887370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.424 [2024-12-07 10:10:20.887403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.424 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.887592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.887625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.887818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.887852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.888062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.888096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.888249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.888265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.888380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.888396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.888548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.888564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.888648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.888662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.888824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.888840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.889005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.889023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.889124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.889138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.889239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.889254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.889421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.889437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.889644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.889658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.889753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.889765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.889910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.889921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.890100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.890118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.890214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.890229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.890399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.890416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.890581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.890613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.890741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.890774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.890928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.890971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.891164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.891197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.891392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.891407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.891559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.891607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.891735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.891767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.892027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.892060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.892338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.892372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.892677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.892693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.892970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.893005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.893163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.893196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.893400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.893431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.893563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.893580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.893808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.893842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.894092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.894126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.894272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.894307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.894561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.894593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.894805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.894821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.894916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.894932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.425 [2024-12-07 10:10:20.895018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.425 [2024-12-07 10:10:20.895034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.425 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.895146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.895177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.895430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.895462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.895664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.895708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.895868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.895886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.895960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.895970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.896077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.896089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.896195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.896205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.896412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.896425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.896610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.896642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.896837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.896870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.897071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.897104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.897317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.897351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.897578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.897611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.897754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.897787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.897931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.897943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.898041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.898053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.898213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.898225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.898400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.898434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.898690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.898728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.898850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.898881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.899077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.899111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.899340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.899372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.899596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.899629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.899833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.899866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.900083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.900116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.900323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.900335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.900484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.900516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.900654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.900686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.900875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.900908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.901045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.901077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.901336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.901370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.901510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.901544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.901789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.901801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.902019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.902032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.902178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.902190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.902368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.902381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.902545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.902558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.902657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.902668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.902799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.902811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.902986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.902999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.903108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.903120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.903212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.903222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.903295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.426 [2024-12-07 10:10:20.903306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.426 qpair failed and we were unable to recover it.
00:35:52.426 [2024-12-07 10:10:20.903403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.427 [2024-12-07 10:10:20.903416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.427 qpair failed and we were unable to recover it.
00:35:52.427 [2024-12-07 10:10:20.903589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.427 [2024-12-07 10:10:20.903624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.427 qpair failed and we were unable to recover it.
00:35:52.427 [2024-12-07 10:10:20.903753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.427 [2024-12-07 10:10:20.903785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.427 qpair failed and we were unable to recover it.
00:35:52.427 [2024-12-07 10:10:20.903976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.427 [2024-12-07 10:10:20.904009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.427 qpair failed and we were unable to recover it.
00:35:52.427 [2024-12-07 10:10:20.904153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.427 [2024-12-07 10:10:20.904186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.427 qpair failed and we were unable to recover it.
00:35:52.427 [2024-12-07 10:10:20.904385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.427 [2024-12-07 10:10:20.904419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.427 qpair failed and we were unable to recover it.
00:35:52.427 [2024-12-07 10:10:20.904621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.427 [2024-12-07 10:10:20.904654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.427 qpair failed and we were unable to recover it.
00:35:52.427 [2024-12-07 10:10:20.904844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.427 [2024-12-07 10:10:20.904876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.427 qpair failed and we were unable to recover it.
00:35:52.427 [2024-12-07 10:10:20.905037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.427 [2024-12-07 10:10:20.905070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.427 qpair failed and we were unable to recover it.
00:35:52.427 [2024-12-07 10:10:20.905276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.905309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.905567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.905601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.905721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.905753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.906017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.906049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.906184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.906217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 
00:35:52.427 [2024-12-07 10:10:20.906435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.906468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.906744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.906777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.906910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.906923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.907009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.907021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.907265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.907277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 
00:35:52.427 [2024-12-07 10:10:20.907374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.907385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.907625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.907661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.907857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.907889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.908024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.908058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.908221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.908256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 
00:35:52.427 [2024-12-07 10:10:20.908454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.908487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.908693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.908705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.908913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.908941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.909101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.909113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.909226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.909236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 
00:35:52.427 [2024-12-07 10:10:20.909387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.909399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.909562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.909574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.909717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.909729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.909849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.909883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.910090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.910124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 
00:35:52.427 [2024-12-07 10:10:20.910246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.910278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.910414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.910455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.910538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.910549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.910770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.910782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.910952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.910964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 
00:35:52.427 [2024-12-07 10:10:20.911029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.911041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.911199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.911213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.911374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.911406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.911526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.427 [2024-12-07 10:10:20.911561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.427 qpair failed and we were unable to recover it. 00:35:52.427 [2024-12-07 10:10:20.911702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.911736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 
00:35:52.428 [2024-12-07 10:10:20.911936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.911981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.912167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.912179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.912367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.912400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.912589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.912622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.912758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.912791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 
00:35:52.428 [2024-12-07 10:10:20.912980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.912994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.913170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.913183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.913290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.913301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.913396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.913407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.913495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.913506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 
00:35:52.428 [2024-12-07 10:10:20.913741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.913754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.913843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.913854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.913994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.914006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.914122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.914135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.914237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.914249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 
00:35:52.428 [2024-12-07 10:10:20.914419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.914432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.914593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.914625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.914817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.914850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.914981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.915015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.915209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.915242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 
00:35:52.428 [2024-12-07 10:10:20.915445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.915457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.915543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.915556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.915773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.915806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.915945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.915987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.916143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.916175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 
00:35:52.428 [2024-12-07 10:10:20.916373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.916405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.916602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.916634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.916834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.916846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.917009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.917022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.917104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.917115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 
00:35:52.428 [2024-12-07 10:10:20.917228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.917239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.917375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.917388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.917563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.917576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.917753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.917766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.917926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.917938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 
00:35:52.428 [2024-12-07 10:10:20.918048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.918061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.918165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.918204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.918334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.918368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.918563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.918596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.918778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.918790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 
00:35:52.428 [2024-12-07 10:10:20.918951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.918964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.919083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.919115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.428 qpair failed and we were unable to recover it. 00:35:52.428 [2024-12-07 10:10:20.919258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.428 [2024-12-07 10:10:20.919291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.919546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.919579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.919770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.919804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 
00:35:52.429 [2024-12-07 10:10:20.920013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.920046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.920187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.920220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.920421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.920454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.920718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.920751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.920864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.920896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 
00:35:52.429 [2024-12-07 10:10:20.921072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.921086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.921251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.921285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.921476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.921510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.921670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.921704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.921970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.922004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 
00:35:52.429 [2024-12-07 10:10:20.922226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.922258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.922398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.922431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.922571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.922608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.922863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.922897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.923098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.923133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 
00:35:52.429 [2024-12-07 10:10:20.923273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.923306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.923492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.923504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.923665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.923678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.923898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.923931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.924065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.924098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 
00:35:52.429 [2024-12-07 10:10:20.924382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.924414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.924604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.924637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.924765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.924798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.924927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.924973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.925093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.925106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 
00:35:52.429 [2024-12-07 10:10:20.925253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.925265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.925418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.925430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.925602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.925635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.925837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.925870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.926068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.926103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 
00:35:52.429 [2024-12-07 10:10:20.926309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.926342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.926540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.926578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.926761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.926773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.926930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.926974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.927107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.927139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 
00:35:52.429 [2024-12-07 10:10:20.927409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.429 [2024-12-07 10:10:20.927443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.429 qpair failed and we were unable to recover it. 00:35:52.429 [2024-12-07 10:10:20.927719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.927752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.928059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.928072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.928177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.928190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.928386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.928419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 
00:35:52.430 [2024-12-07 10:10:20.928560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.928592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.928743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.928776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.928940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.928984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.929263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.929295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.929439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.929473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 
00:35:52.430 [2024-12-07 10:10:20.929624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.929661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.929819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.929861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.930123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.930169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.930423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.930455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.930657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.930690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 
00:35:52.430 [2024-12-07 10:10:20.930914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.930956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.931189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.931223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.931478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.931514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.931727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.931740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.931853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.931865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 
00:35:52.430 [2024-12-07 10:10:20.932040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.932053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.932134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.932147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.932272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.932305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.932552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.932626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.932909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.932963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 
00:35:52.430 [2024-12-07 10:10:20.933202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.933237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.933389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.933422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.933567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.933600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.933828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.933869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.934060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.934076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 
00:35:52.430 [2024-12-07 10:10:20.934169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.934208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.934494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.934529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.934788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.934830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.935038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.935072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.935200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.935233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 
00:35:52.430 [2024-12-07 10:10:20.935436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.935468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.935659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.935698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.935838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.935851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.936009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.936022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.936105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.936117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 
00:35:52.430 [2024-12-07 10:10:20.936203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.936214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.936311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.936324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.936489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.936512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.936624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.430 [2024-12-07 10:10:20.936655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.430 qpair failed and we were unable to recover it. 00:35:52.430 [2024-12-07 10:10:20.936914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.936956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 
00:35:52.431 [2024-12-07 10:10:20.937158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.937190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 00:35:52.431 [2024-12-07 10:10:20.937386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.937426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 00:35:52.431 [2024-12-07 10:10:20.937584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.937618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 00:35:52.431 [2024-12-07 10:10:20.937751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.937792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 00:35:52.431 [2024-12-07 10:10:20.938003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.938037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 
00:35:52.431 [2024-12-07 10:10:20.938264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.938297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 00:35:52.431 [2024-12-07 10:10:20.938433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.938466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 00:35:52.431 [2024-12-07 10:10:20.938669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.938704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 00:35:52.431 [2024-12-07 10:10:20.938853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.938889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 00:35:52.431 [2024-12-07 10:10:20.939043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.939077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 
00:35:52.431 [2024-12-07 10:10:20.939235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.939270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 00:35:52.431 [2024-12-07 10:10:20.939525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.939558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 00:35:52.431 [2024-12-07 10:10:20.939686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.939700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 00:35:52.431 [2024-12-07 10:10:20.939917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.939959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 00:35:52.431 [2024-12-07 10:10:20.940154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.940187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 
00:35:52.431 [2024-12-07 10:10:20.940358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.940392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 00:35:52.431 [2024-12-07 10:10:20.940528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.940541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 00:35:52.431 [2024-12-07 10:10:20.940695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.940707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 00:35:52.431 [2024-12-07 10:10:20.940883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.940972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 00:35:52.431 [2024-12-07 10:10:20.941232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.941304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it. 
00:35:52.431 [2024-12-07 10:10:20.941573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.431 [2024-12-07 10:10:20.941610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.431 qpair failed and we were unable to recover it.
[log condensed: the same three-message pattern — posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp_qpair_connect_sock reporting a sock connection error against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously from 10:10:20.941 through 10:10:20.966. The failing tqpair is 0x7efbf8000b90 through 10:10:20.950, then 0x7efc04000b90 for a single attempt at 10:10:20.950137, then 0x2159010 for all remaining attempts. Every attempt in this window fails identically; no connection is recovered.]
00:35:52.434 [2024-12-07 10:10:20.966179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.966194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.966346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.966363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.966590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.966622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.966765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.966798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.967018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.967058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 
00:35:52.434 [2024-12-07 10:10:20.967210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.967245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.967444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.967476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.967681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.967714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.967920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.967936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.968106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.968124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 
00:35:52.434 [2024-12-07 10:10:20.968282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.968298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.968396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.968412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.968501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.968516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.968633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.968649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.968893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.968926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 
00:35:52.434 [2024-12-07 10:10:20.969084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.969120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.969314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.969345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.969541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.969575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.969717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.969750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.969905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.969938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 
00:35:52.434 [2024-12-07 10:10:20.970124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.970157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.970295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.970329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.970520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.970554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.970759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.970792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.970986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.971003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 
00:35:52.434 [2024-12-07 10:10:20.971153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.971169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.971337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.971375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.971610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.971643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.971831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.971864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.971981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.971998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 
00:35:52.434 [2024-12-07 10:10:20.972104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.972120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.972210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.972226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.972345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.972361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-12-07 10:10:20.972551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.434 [2024-12-07 10:10:20.972567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.972721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.972737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 
00:35:52.435 [2024-12-07 10:10:20.972906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.972939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.973166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.973199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.973356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.973389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.973538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.973572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.973762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.973795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 
00:35:52.435 [2024-12-07 10:10:20.973921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.973936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.974088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.974116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.974210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.974225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.974378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.974395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.974489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.974506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 
00:35:52.435 [2024-12-07 10:10:20.974616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.974644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.974733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.974746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.974900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.974936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.975103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.975140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.975350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.975385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 
00:35:52.435 [2024-12-07 10:10:20.975569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.975582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.975660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.975671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.975764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.975776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.975917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.975973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.976133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.976167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 
00:35:52.435 [2024-12-07 10:10:20.976307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.976340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.976539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.976573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.976778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.976814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.976943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.977001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.977141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.977174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 
00:35:52.435 [2024-12-07 10:10:20.977309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.977343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.977542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.977574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.977765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.977798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.977943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.977989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.978189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.978223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 
00:35:52.435 [2024-12-07 10:10:20.978374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.978409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.978543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.978581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.978858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.978871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.979027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.979038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.979109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.979121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 
00:35:52.435 [2024-12-07 10:10:20.979290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.979323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.979600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.979632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.979765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.979778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.979917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.979931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-12-07 10:10:20.980050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.435 [2024-12-07 10:10:20.980062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.435 qpair failed and we were unable to recover it. 
00:35:52.435 [2024-12-07 10:10:20.980272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.435 [2024-12-07 10:10:20.980285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.435 qpair failed and we were unable to recover it.
00:35:52.435 [2024-12-07 10:10:20.980450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.435 [2024-12-07 10:10:20.980462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.435 qpair failed and we were unable to recover it.
00:35:52.435 [2024-12-07 10:10:20.980553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.435 [2024-12-07 10:10:20.980564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.980666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.980678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.980760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.980792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.981077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.981112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.981304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.981339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.981477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.981514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.981638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.981651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.981739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.981751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.981840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.981872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.982010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.982045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.982271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.982305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.982493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.982526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.982668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.982700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.982854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.982865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.982942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.982958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.983110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.983155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.983289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.983322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.983520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.983554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.983698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.983731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.983858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.983871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.984057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.984093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.984290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.984324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.984539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.984574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.984719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.984756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.984919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.984964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.985237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.985271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.985418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.985452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.985590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.985623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.985843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.985877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.986003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.986038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.986311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.986348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.986477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.986517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.986674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.986687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.986845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.986872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.987071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.987106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.987261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.987293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.987430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.987463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.987735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.987768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.987931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.987977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.988178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.988214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.988489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.988522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.988682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.988715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.988899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.436 [2024-12-07 10:10:20.988911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.436 qpair failed and we were unable to recover it.
00:35:52.436 [2024-12-07 10:10:20.989059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.989100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.989218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.989253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.989391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.989428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.989660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.989694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.989901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.989935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.990172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.990213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.990415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.990459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.990558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.990569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.990712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.990724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.990829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.990840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.991011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.991024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.991106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.991117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.991360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.991394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.991526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.991560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.991683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.991716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.991924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.991981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.992122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.992154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.992361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.992396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.992519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.992553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.992630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.992649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.992893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.992927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.993160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.993193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.993342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.993375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.993671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.993703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.993839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.993871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.993997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.994031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.994323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.994357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.994548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.994580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.994785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.994818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.994937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.994952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.995135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.995168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.995315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.995350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.995548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.995583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.995774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.995808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.995938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.995979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.996103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.996136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.996414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.996447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.996641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.996653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.996759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.437 [2024-12-07 10:10:20.996771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.437 qpair failed and we were unable to recover it.
00:35:52.437 [2024-12-07 10:10:20.996920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:20.996932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:20.997127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:20.997161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:20.997375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:20.997409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:20.997521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:20.997555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:20.997700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:20.997713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:20.997772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:20.997783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:20.997869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:20.997882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:20.998111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:20.998123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:20.998223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:20.998233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:20.998311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:20.998322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:20.998417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:20.998448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:20.998631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:20.998698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:20.999014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:20.999082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:20.999294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:20.999327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:20.999522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:20.999552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:20.999739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:20.999769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:20.999895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:20.999925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:21.000186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:21.000199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:21.000295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:21.000337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:21.000607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:21.000638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:21.000845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:21.000875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:21.001130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:21.001140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:21.001302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:21.001312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:21.001471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:21.001481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:21.001642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:21.001651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:21.001809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:21.001839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:21.001989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:21.002020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:21.002167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:21.002197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:21.002344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:21.002376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:21.002583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:21.002614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:21.002816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.438 [2024-12-07 10:10:21.002846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.438 qpair failed and we were unable to recover it.
00:35:52.438 [2024-12-07 10:10:21.003117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.438 [2024-12-07 10:10:21.003149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.438 qpair failed and we were unable to recover it. 00:35:52.438 [2024-12-07 10:10:21.003275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.438 [2024-12-07 10:10:21.003305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.438 qpair failed and we were unable to recover it. 00:35:52.438 [2024-12-07 10:10:21.003524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.438 [2024-12-07 10:10:21.003555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.438 qpair failed and we were unable to recover it. 00:35:52.438 [2024-12-07 10:10:21.003703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.438 [2024-12-07 10:10:21.003733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.438 qpair failed and we were unable to recover it. 00:35:52.438 [2024-12-07 10:10:21.003966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.438 [2024-12-07 10:10:21.003976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.438 qpair failed and we were unable to recover it. 
00:35:52.438 [2024-12-07 10:10:21.004076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.438 [2024-12-07 10:10:21.004106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.438 qpair failed and we were unable to recover it. 00:35:52.438 [2024-12-07 10:10:21.004325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.438 [2024-12-07 10:10:21.004354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.438 qpair failed and we were unable to recover it. 00:35:52.438 [2024-12-07 10:10:21.004553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.438 [2024-12-07 10:10:21.004581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.438 qpair failed and we were unable to recover it. 00:35:52.438 [2024-12-07 10:10:21.004775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.438 [2024-12-07 10:10:21.004804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.438 qpair failed and we were unable to recover it. 00:35:52.438 [2024-12-07 10:10:21.004920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.438 [2024-12-07 10:10:21.004930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.438 qpair failed and we were unable to recover it. 
00:35:52.438 [2024-12-07 10:10:21.005154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.438 [2024-12-07 10:10:21.005165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.438 qpair failed and we were unable to recover it. 00:35:52.438 [2024-12-07 10:10:21.005305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.438 [2024-12-07 10:10:21.005315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.438 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.005480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.005490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.005697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.005707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.005998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.006009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 
00:35:52.439 [2024-12-07 10:10:21.006112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.006124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.006335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.006345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.006435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.006444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.006527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.006548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.006697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.006705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 
00:35:52.439 [2024-12-07 10:10:21.006784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.006793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.006876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.006885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.007034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.007044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.007209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.007219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.007368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.007377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 
00:35:52.439 [2024-12-07 10:10:21.007535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.007544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.007746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.007755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.007901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.007910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.008003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.008012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.008175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.008185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 
00:35:52.439 [2024-12-07 10:10:21.008361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.008371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.008475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.008486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.008580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.008591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.008742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.008751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.008835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.008844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 
00:35:52.439 [2024-12-07 10:10:21.008916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.008926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.009003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.009012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.009099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.009108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.009277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.009286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.009363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.009372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 
00:35:52.439 [2024-12-07 10:10:21.009469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.009478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.009559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.009568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.009718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.009728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.009796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.009805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.009889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.009898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 
00:35:52.439 [2024-12-07 10:10:21.009991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.010001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.010104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.010114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.010335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.010344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.010428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.010440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.010590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.010599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 
00:35:52.439 [2024-12-07 10:10:21.010810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.010819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.010911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.010921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.011082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.011092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.011253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.011263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 00:35:52.439 [2024-12-07 10:10:21.011421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.439 [2024-12-07 10:10:21.011431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.439 qpair failed and we were unable to recover it. 
00:35:52.440 [2024-12-07 10:10:21.011576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.011587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.011830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.011839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.011916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.011926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.012034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.012045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.012260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.012270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 
00:35:52.440 [2024-12-07 10:10:21.012405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.012414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.012502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.012511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.012675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.012684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.012754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.012764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.012849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.012860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 
00:35:52.440 [2024-12-07 10:10:21.012939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.012952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.013035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.013044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.013178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.013188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.013398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.013408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.013488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.013498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 
00:35:52.440 [2024-12-07 10:10:21.013648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.013657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.013751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.013761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.013932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.013941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.014086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.014095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.014187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.014197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 
00:35:52.440 [2024-12-07 10:10:21.014344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.014353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.014524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.014534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.014684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.014694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.014768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.014778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.014854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.014863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 
00:35:52.440 [2024-12-07 10:10:21.014955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.014966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.015194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.015204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.015298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.015307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.015377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.015386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 00:35:52.440 [2024-12-07 10:10:21.015485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.440 [2024-12-07 10:10:21.015495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:52.440 qpair failed and we were unable to recover it. 
00:35:52.440 [2024-12-07 10:10:21.015592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.440 [2024-12-07 10:10:21.015602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.440 qpair failed and we were unable to recover it.
00:35:52.440 [2024-12-07 10:10:21.015675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.440 [2024-12-07 10:10:21.015685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.440 qpair failed and we were unable to recover it.
00:35:52.440 [2024-12-07 10:10:21.015829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.440 [2024-12-07 10:10:21.015839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.440 qpair failed and we were unable to recover it.
00:35:52.440 [2024-12-07 10:10:21.015996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.440 [2024-12-07 10:10:21.016006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.440 qpair failed and we were unable to recover it.
00:35:52.440 [2024-12-07 10:10:21.016150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.440 [2024-12-07 10:10:21.016159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.440 qpair failed and we were unable to recover it.
00:35:52.440 [2024-12-07 10:10:21.016253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.440 [2024-12-07 10:10:21.016262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.440 qpair failed and we were unable to recover it.
00:35:52.440 [2024-12-07 10:10:21.016404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.440 [2024-12-07 10:10:21.016414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.440 qpair failed and we were unable to recover it.
00:35:52.440 [2024-12-07 10:10:21.016491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.440 [2024-12-07 10:10:21.016500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.440 qpair failed and we were unable to recover it.
00:35:52.440 [2024-12-07 10:10:21.016593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.440 [2024-12-07 10:10:21.016602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.440 qpair failed and we were unable to recover it.
00:35:52.440 [2024-12-07 10:10:21.016757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.440 [2024-12-07 10:10:21.016767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.440 qpair failed and we were unable to recover it.
00:35:52.440 [2024-12-07 10:10:21.016953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.440 [2024-12-07 10:10:21.016965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.440 qpair failed and we were unable to recover it.
00:35:52.440 [2024-12-07 10:10:21.017055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.440 [2024-12-07 10:10:21.017066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.440 qpair failed and we were unable to recover it.
00:35:52.440 [2024-12-07 10:10:21.017223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.440 [2024-12-07 10:10:21.017231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.441 qpair failed and we were unable to recover it.
00:35:52.441 [2024-12-07 10:10:21.017398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.441 [2024-12-07 10:10:21.017407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.441 qpair failed and we were unable to recover it.
00:35:52.441 [2024-12-07 10:10:21.017570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.441 [2024-12-07 10:10:21.017580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.441 qpair failed and we were unable to recover it.
00:35:52.441 [2024-12-07 10:10:21.017686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.441 [2024-12-07 10:10:21.017696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.441 qpair failed and we were unable to recover it.
00:35:52.441 [2024-12-07 10:10:21.017792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.441 [2024-12-07 10:10:21.017801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.441 qpair failed and we were unable to recover it.
00:35:52.441 [2024-12-07 10:10:21.017956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.441 [2024-12-07 10:10:21.017966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.441 qpair failed and we were unable to recover it.
00:35:52.441 [2024-12-07 10:10:21.018113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.441 [2024-12-07 10:10:21.018122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.441 qpair failed and we were unable to recover it.
00:35:52.441 [2024-12-07 10:10:21.018258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.441 [2024-12-07 10:10:21.018267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.441 qpair failed and we were unable to recover it.
00:35:52.441 [2024-12-07 10:10:21.018410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.441 [2024-12-07 10:10:21.018420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.441 qpair failed and we were unable to recover it.
00:35:52.441 [2024-12-07 10:10:21.018602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.441 [2024-12-07 10:10:21.018611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.441 qpair failed and we were unable to recover it.
00:35:52.441 [2024-12-07 10:10:21.018682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.441 [2024-12-07 10:10:21.018692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.441 qpair failed and we were unable to recover it.
00:35:52.441 [2024-12-07 10:10:21.018801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.441 [2024-12-07 10:10:21.018811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.441 qpair failed and we were unable to recover it.
00:35:52.441 [2024-12-07 10:10:21.018884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.441 [2024-12-07 10:10:21.018893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.441 qpair failed and we were unable to recover it.
00:35:52.441 [2024-12-07 10:10:21.019036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.441 [2024-12-07 10:10:21.019048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.441 qpair failed and we were unable to recover it.
00:35:52.441 [2024-12-07 10:10:21.019153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.441 [2024-12-07 10:10:21.019163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.441 qpair failed and we were unable to recover it.
00:35:52.441 [2024-12-07 10:10:21.019261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.441 [2024-12-07 10:10:21.019272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.441 qpair failed and we were unable to recover it.
00:35:52.441 [2024-12-07 10:10:21.019423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.441 [2024-12-07 10:10:21.019433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.441 qpair failed and we were unable to recover it.
00:35:52.441 [2024-12-07 10:10:21.019534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.441 [2024-12-07 10:10:21.019544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.441 qpair failed and we were unable to recover it.
00:35:52.441 [2024-12-07 10:10:21.019692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.707 [2024-12-07 10:10:21.309444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.707 qpair failed and we were unable to recover it.
00:35:52.707 [2024-12-07 10:10:21.309769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.707 [2024-12-07 10:10:21.309783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.707 qpair failed and we were unable to recover it.
00:35:52.707 [2024-12-07 10:10:21.309999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.707 [2024-12-07 10:10:21.310010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.707 qpair failed and we were unable to recover it.
00:35:52.707 [2024-12-07 10:10:21.310230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.707 [2024-12-07 10:10:21.310240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.707 qpair failed and we were unable to recover it.
00:35:52.707 [2024-12-07 10:10:21.310414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.707 [2024-12-07 10:10:21.310424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.707 qpair failed and we were unable to recover it.
00:35:52.707 [2024-12-07 10:10:21.310589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.707 [2024-12-07 10:10:21.310598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.707 qpair failed and we were unable to recover it.
00:35:52.707 [2024-12-07 10:10:21.310743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.707 [2024-12-07 10:10:21.310753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.707 qpair failed and we were unable to recover it.
00:35:52.707 [2024-12-07 10:10:21.310923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.707 [2024-12-07 10:10:21.310933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.707 qpair failed and we were unable to recover it.
00:35:52.707 [2024-12-07 10:10:21.311108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.707 [2024-12-07 10:10:21.311119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.707 qpair failed and we were unable to recover it.
00:35:52.707 [2024-12-07 10:10:21.311275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.707 [2024-12-07 10:10:21.311285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.707 qpair failed and we were unable to recover it.
00:35:52.707 [2024-12-07 10:10:21.311386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.707 [2024-12-07 10:10:21.311396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.707 qpair failed and we were unable to recover it.
00:35:52.707 [2024-12-07 10:10:21.311555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.707 [2024-12-07 10:10:21.311566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.707 qpair failed and we were unable to recover it.
00:35:52.707 [2024-12-07 10:10:21.311745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.707 [2024-12-07 10:10:21.311756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.707 qpair failed and we were unable to recover it.
00:35:52.707 [2024-12-07 10:10:21.311845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.707 [2024-12-07 10:10:21.311856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.707 qpair failed and we were unable to recover it.
00:35:52.707 [2024-12-07 10:10:21.311957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.707 [2024-12-07 10:10:21.311968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.707 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.312123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.312135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.312300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.312311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.312399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.312410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.312573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.312605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.312807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.312838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.313028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.313068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.313285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.313317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.313519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.313552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.313688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.313698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.313788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.313799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.313964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.313976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.314196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.314227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.314347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.314378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.314513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.314544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.314741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.314752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.314924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.314967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.315167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.315205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.315405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.315439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.315670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.315681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.315833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.315844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.316047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.316081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.316221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.316253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.316391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.316421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.316646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.316657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.316859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.316895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.317070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.317103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.317306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.317338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.317619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.317630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.317781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.317792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.317968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.318001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.318114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.318148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.318332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.318369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.318547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.318622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.318794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.318820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.318932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.318946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.319028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.319039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.319121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.319132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.319360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.319371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.319531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.319541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.319703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.319715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.319844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.319878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.320080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.320114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.320389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.320420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.320682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.320693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.320777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.320789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.321001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.321012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.321112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.321124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.321271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.321283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.321393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.321404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.321510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.708 [2024-12-07 10:10:21.321522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:52.708 qpair failed and we were unable to recover it.
00:35:52.708 [2024-12-07 10:10:21.321672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.708 [2024-12-07 10:10:21.321684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.708 qpair failed and we were unable to recover it. 00:35:52.708 [2024-12-07 10:10:21.321921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.708 [2024-12-07 10:10:21.321933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.708 qpair failed and we were unable to recover it. 00:35:52.708 [2024-12-07 10:10:21.322082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.708 [2024-12-07 10:10:21.322095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.708 qpair failed and we were unable to recover it. 00:35:52.708 [2024-12-07 10:10:21.322196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.708 [2024-12-07 10:10:21.322208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.708 qpair failed and we were unable to recover it. 00:35:52.708 [2024-12-07 10:10:21.322374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.708 [2024-12-07 10:10:21.322386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.708 qpair failed and we were unable to recover it. 
00:35:52.708 [2024-12-07 10:10:21.322557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.708 [2024-12-07 10:10:21.322589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.708 qpair failed and we were unable to recover it. 00:35:52.708 [2024-12-07 10:10:21.322753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.708 [2024-12-07 10:10:21.322785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.708 qpair failed and we were unable to recover it. 00:35:52.708 [2024-12-07 10:10:21.323039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.708 [2024-12-07 10:10:21.323074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.708 qpair failed and we were unable to recover it. 00:35:52.708 [2024-12-07 10:10:21.323267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.708 [2024-12-07 10:10:21.323300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.708 qpair failed and we were unable to recover it. 00:35:52.708 [2024-12-07 10:10:21.323595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.708 [2024-12-07 10:10:21.323632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.708 qpair failed and we were unable to recover it. 
00:35:52.708 [2024-12-07 10:10:21.323840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.708 [2024-12-07 10:10:21.323852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.708 qpair failed and we were unable to recover it. 00:35:52.708 [2024-12-07 10:10:21.323979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.708 [2024-12-07 10:10:21.324013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.708 qpair failed and we were unable to recover it. 00:35:52.708 [2024-12-07 10:10:21.324206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.708 [2024-12-07 10:10:21.324238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.708 qpair failed and we were unable to recover it. 00:35:52.708 [2024-12-07 10:10:21.324375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.708 [2024-12-07 10:10:21.324407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.708 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.324613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.324624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 
00:35:52.709 [2024-12-07 10:10:21.324777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.324788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.324896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.324908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.325002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.325014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.325249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.325260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.325400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.325412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 
00:35:52.709 [2024-12-07 10:10:21.325495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.325507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.325671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.325682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.325848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.325881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.326050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.326085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.326231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.326264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 
00:35:52.709 [2024-12-07 10:10:21.326452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.326484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.326754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.326766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.326842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.326853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.326952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.326964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.327219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.327230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 
00:35:52.709 [2024-12-07 10:10:21.327327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.327339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.327527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.327539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.327687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.327731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.327924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.327966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.328084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.328120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 
00:35:52.709 [2024-12-07 10:10:21.328324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.328356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.328476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.328514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.328709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.328741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.328963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.328997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.329220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.329253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 
00:35:52.709 [2024-12-07 10:10:21.329448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.329479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.329648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.329661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.329815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.329826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.329888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.329899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.330059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.330094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 
00:35:52.709 [2024-12-07 10:10:21.330227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.330259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.330452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.330484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.330677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.330689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.330832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.330866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.331121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.331154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 
00:35:52.709 [2024-12-07 10:10:21.331317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.331349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.331546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.331562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.331727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.331759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.331958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.331991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.332189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.332221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 
00:35:52.709 [2024-12-07 10:10:21.332412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.332445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.332584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.332616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.332813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.332846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.333053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.333087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.333366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.333398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 
00:35:52.709 [2024-12-07 10:10:21.333607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.333638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.333825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.333857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.334009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.334043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.334202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.334241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.334379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.334411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 
00:35:52.709 [2024-12-07 10:10:21.334666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.334681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.334834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.334849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.335075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.335091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.335319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.335353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.335501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.335533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 
00:35:52.709 [2024-12-07 10:10:21.335659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.335692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.335923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.335985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.336188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.336220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.336408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.336440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.336699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.336733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 
00:35:52.709 [2024-12-07 10:10:21.336914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.336945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.337218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.337233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.337393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.337408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.337517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.337556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.337809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.337841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 
00:35:52.709 [2024-12-07 10:10:21.338167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.338200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.338333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.709 [2024-12-07 10:10:21.338365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.709 qpair failed and we were unable to recover it. 00:35:52.709 [2024-12-07 10:10:21.338506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.338537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.338721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.338736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.338828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.338843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 
00:35:52.710 [2024-12-07 10:10:21.339064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.339096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.339232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.339264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.339519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.339551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.339757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.339789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.340058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.340092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 
00:35:52.710 [2024-12-07 10:10:21.340230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.340262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.340398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.340413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.340615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.340647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.340853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.340884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.341059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.341092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 
00:35:52.710 [2024-12-07 10:10:21.341325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.341340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.341511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.341525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.341683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.341699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.341803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.341817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.341981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.341996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 
00:35:52.710 [2024-12-07 10:10:21.342117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.342148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.342298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.342330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.342606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.342637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.342734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.342748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.342906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.342921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 
00:35:52.710 [2024-12-07 10:10:21.343022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.343037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.343132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.343146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.343318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.343333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.343424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.343439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.343592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.343606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 
00:35:52.710 [2024-12-07 10:10:21.343776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.343792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.344054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.344087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.344235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.344268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.344393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.344426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.344542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.344557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 
00:35:52.710 [2024-12-07 10:10:21.344722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.344736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.344967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.345001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.345140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.345171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.345297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.345330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.345531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.345564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 
00:35:52.710 [2024-12-07 10:10:21.345714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.345745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.345935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.345980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.346237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.346253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.346481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.346495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.346667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.346699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 
00:35:52.710 [2024-12-07 10:10:21.346835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.346867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.346993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.347027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.347177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.347208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.347352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.347367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.347532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.347547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 
00:35:52.710 [2024-12-07 10:10:21.347827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.347860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.348061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.348102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.348305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.348336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.348521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.348553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.348758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.348772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 
00:35:52.710 [2024-12-07 10:10:21.348938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.348991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.349110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.349142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.349289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.349321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.349622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.349652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.349862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.349894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 
00:35:52.710 [2024-12-07 10:10:21.350038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.350071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.350353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.350385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.710 [2024-12-07 10:10:21.350521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.710 [2024-12-07 10:10:21.350552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.710 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.350799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.350814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.350919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.350934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 
00:35:52.711 [2024-12-07 10:10:21.351043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.351058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.351228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.351243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.351428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.351460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.351587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.351619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.351845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.351877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 
00:35:52.711 [2024-12-07 10:10:21.352087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.352121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.352302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.352318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.352424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.352438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.352599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.352614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.352792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.352806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 
00:35:52.711 [2024-12-07 10:10:21.352985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.353019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.353136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.353172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.353306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.353337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.353457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.353495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.353702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.353716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 
00:35:52.711 [2024-12-07 10:10:21.353894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.353926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.354134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.354166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.354312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.354346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.354524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.354538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.354628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.354643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 
00:35:52.711 [2024-12-07 10:10:21.354751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.354766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.354944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.354987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.355095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.355126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.355269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.355302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.355516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.355547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 
00:35:52.711 [2024-12-07 10:10:21.355808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.355822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.355968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.355984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.356087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.356102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.356220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.356235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.356396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.356433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 
00:35:52.711 [2024-12-07 10:10:21.356624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.356656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.356790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.356822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.356981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.357016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.357205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.357237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.357344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.357376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 
00:35:52.711 [2024-12-07 10:10:21.357571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.357587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.357691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.357704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.357923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.357937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.358165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.358180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.358301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.358315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 
00:35:52.711 [2024-12-07 10:10:21.358512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.358543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.358765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.358799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.358929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.358970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.359091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.359123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.359381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.359414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 
00:35:52.711 [2024-12-07 10:10:21.359598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.359628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.359817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.359832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.359973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.359988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.360089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.360104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.360215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.360254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 
00:35:52.711 [2024-12-07 10:10:21.360562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.360594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.360740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.360772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.360964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.360979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.361175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.361207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.361461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.361533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 
00:35:52.711 [2024-12-07 10:10:21.361752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.361789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.361962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.361999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.362122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.362156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.362355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.362394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.362559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.362574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 
00:35:52.711 [2024-12-07 10:10:21.362695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.362728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.362855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.362885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.363099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.363133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.363331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.363363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.363568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.363600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 
00:35:52.711 [2024-12-07 10:10:21.363813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.711 [2024-12-07 10:10:21.363846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.711 qpair failed and we were unable to recover it. 00:35:52.711 [2024-12-07 10:10:21.364003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.364036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.364231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.364273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.364517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.364533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.364711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.364744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 
00:35:52.712 [2024-12-07 10:10:21.365019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.365053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.365316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.365349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.365537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.365569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.365713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.365728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.365821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.365836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 
00:35:52.712 [2024-12-07 10:10:21.365944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.365965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.366142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.366157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.366232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.366247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.366406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.366427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.366600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.366632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 
00:35:52.712 [2024-12-07 10:10:21.366842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.366876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.367009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.367043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.367265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.367298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.367497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.367529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.367726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.367740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 
00:35:52.712 [2024-12-07 10:10:21.367909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.367925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.368091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.368107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.368194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.368208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.368360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.368375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.368459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.368500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 
00:35:52.712 [2024-12-07 10:10:21.368730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.368761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.368990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.369024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.369287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.369320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.369519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.369534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.369635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.369649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 
00:35:52.712 [2024-12-07 10:10:21.369802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.369817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.369996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.370030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.370232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.370264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.370471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.370486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.370715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.370749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 
00:35:52.712 [2024-12-07 10:10:21.371031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.371065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.371294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.371328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.371516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.371548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.371748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.371781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.371972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.372005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 
00:35:52.712 [2024-12-07 10:10:21.372211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.372243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.372371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.372403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.372664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.372682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.372829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.372843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.372961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.372977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 
00:35:52.712 [2024-12-07 10:10:21.373089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.373104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.373242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.373256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.373360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.373374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.373495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.373529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.373783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.373815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 
00:35:52.712 [2024-12-07 10:10:21.374050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.374065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.374311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.374344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.374578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.374611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.374821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.374853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.375063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.375097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 
00:35:52.712 [2024-12-07 10:10:21.375287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.375319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.375579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.375612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.375829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.375876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.376027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.376042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.376195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.376210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 
00:35:52.712 [2024-12-07 10:10:21.376362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.376377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.376473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.376488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.376585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.376600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.376691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.376706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.712 [2024-12-07 10:10:21.376934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.376978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 
00:35:52.712 [2024-12-07 10:10:21.377184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.712 [2024-12-07 10:10:21.377217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.712 qpair failed and we were unable to recover it. 00:35:52.713 [2024-12-07 10:10:21.377404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.713 [2024-12-07 10:10:21.377437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.713 qpair failed and we were unable to recover it. 00:35:52.713 [2024-12-07 10:10:21.377578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.713 [2024-12-07 10:10:21.377592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.713 qpair failed and we were unable to recover it. 00:35:52.713 [2024-12-07 10:10:21.377775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.713 [2024-12-07 10:10:21.377791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.713 qpair failed and we were unable to recover it. 00:35:52.713 [2024-12-07 10:10:21.377901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.713 [2024-12-07 10:10:21.377917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.713 qpair failed and we were unable to recover it. 
00:35:52.713 [2024-12-07 10:10:21.378082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.713 [2024-12-07 10:10:21.378098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.713 qpair failed and we were unable to recover it. 00:35:52.713 [2024-12-07 10:10:21.378254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.713 [2024-12-07 10:10:21.378288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.713 qpair failed and we were unable to recover it. 00:35:52.713 [2024-12-07 10:10:21.378436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.713 [2024-12-07 10:10:21.378469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.713 qpair failed and we were unable to recover it. 00:35:52.713 [2024-12-07 10:10:21.378676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.713 [2024-12-07 10:10:21.378709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.713 qpair failed and we were unable to recover it. 00:35:52.713 [2024-12-07 10:10:21.378893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.713 [2024-12-07 10:10:21.378907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.713 qpair failed and we were unable to recover it. 
00:35:52.713 [2024-12-07 10:10:21.379136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.713 [2024-12-07 10:10:21.379170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.713 qpair failed and we were unable to recover it. 00:35:52.713 [2024-12-07 10:10:21.379375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.713 [2024-12-07 10:10:21.379407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.713 qpair failed and we were unable to recover it. 00:35:52.713 [2024-12-07 10:10:21.379630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.713 [2024-12-07 10:10:21.379646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.713 qpair failed and we were unable to recover it. 00:35:52.713 [2024-12-07 10:10:21.379749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.713 [2024-12-07 10:10:21.379764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.713 qpair failed and we were unable to recover it. 00:35:52.713 [2024-12-07 10:10:21.379890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.713 [2024-12-07 10:10:21.379922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.713 qpair failed and we were unable to recover it. 
00:35:52.713 [2024-12-07 10:10:21.380226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.713 [2024-12-07 10:10:21.380259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.713 qpair failed and we were unable to recover it. 00:35:52.728 [2024-12-07 10:10:21.380477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.728 [2024-12-07 10:10:21.380511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.728 qpair failed and we were unable to recover it. 00:35:52.728 [2024-12-07 10:10:21.380769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.728 [2024-12-07 10:10:21.380808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.728 qpair failed and we were unable to recover it. 00:35:52.728 [2024-12-07 10:10:21.381002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.728 [2024-12-07 10:10:21.381018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.728 qpair failed and we were unable to recover it. 00:35:52.728 [2024-12-07 10:10:21.381177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.728 [2024-12-07 10:10:21.381211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.728 qpair failed and we were unable to recover it. 
00:35:52.728 [2024-12-07 10:10:21.381504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.728 [2024-12-07 10:10:21.381536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.728 qpair failed and we were unable to recover it. 00:35:52.728 [2024-12-07 10:10:21.381794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.728 [2024-12-07 10:10:21.381827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.728 qpair failed and we were unable to recover it. 00:35:52.728 [2024-12-07 10:10:21.382019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.728 [2024-12-07 10:10:21.382053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.728 qpair failed and we were unable to recover it. 00:35:52.728 [2024-12-07 10:10:21.382204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.728 [2024-12-07 10:10:21.382236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.728 qpair failed and we were unable to recover it. 00:35:52.728 [2024-12-07 10:10:21.382379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.728 [2024-12-07 10:10:21.382412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.728 qpair failed and we were unable to recover it. 
00:35:52.728 [2024-12-07 10:10:21.382569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.728 [2024-12-07 10:10:21.382614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.728 qpair failed and we were unable to recover it. 00:35:52.728 [2024-12-07 10:10:21.382713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.728 [2024-12-07 10:10:21.382727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.728 qpair failed and we were unable to recover it. 00:35:52.728 [2024-12-07 10:10:21.382968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.728 [2024-12-07 10:10:21.383000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.728 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.383287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.383320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.383586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.383618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 
00:35:52.729 [2024-12-07 10:10:21.383822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.383854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.384049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.384084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.384285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.384318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.384519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.384558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.384722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.384737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 
00:35:52.729 [2024-12-07 10:10:21.384959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.384980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.385219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.385234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.385334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.385349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.385443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.385458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.385705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.385719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 
00:35:52.729 [2024-12-07 10:10:21.385884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.385917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.386071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.386105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.386226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.386257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.386396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.386429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.386667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.386740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 
00:35:52.729 [2024-12-07 10:10:21.386892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.386927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.387150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.387184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.387385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.387418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.387688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.387703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.387871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.387904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 
00:35:52.729 [2024-12-07 10:10:21.388157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.388191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.388451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.388466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.388606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.388622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.388872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.388908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.389181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.389214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 
00:35:52.729 [2024-12-07 10:10:21.389410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.389443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.389717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.389749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.389970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.389986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.390182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.390217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.390470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.390503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 
00:35:52.729 [2024-12-07 10:10:21.390652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.390686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.390883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.390897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.391006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.391021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.391190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.391223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.391504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.391537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 
00:35:52.729 [2024-12-07 10:10:21.391796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.391811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.391918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.391933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.392112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.392146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.392411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.392444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.392630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.392664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 
00:35:52.729 [2024-12-07 10:10:21.392809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.392843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.393048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.393081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.393283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.393316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.393540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.393555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.393781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.393813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 
00:35:52.729 [2024-12-07 10:10:21.393962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.393995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.394201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.394233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.394415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.394448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.394634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.394649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.394799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.394814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 
00:35:52.729 [2024-12-07 10:10:21.394906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.394920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.395099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.395131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.395325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.395364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.395511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.395543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.395742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.395758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 
00:35:52.729 [2024-12-07 10:10:21.395922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.395964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.396253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.396286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.396504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.396535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.396780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.396794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.397026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.397041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 
00:35:52.729 [2024-12-07 10:10:21.397214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.397229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.397401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.397433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.397586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.397619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.397814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.397844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.398030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.398045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 
00:35:52.729 [2024-12-07 10:10:21.398155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.398188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.398386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.398417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.398627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.398659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.398808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.398823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 00:35:52.729 [2024-12-07 10:10:21.398994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.729 [2024-12-07 10:10:21.399009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.729 qpair failed and we were unable to recover it. 
00:35:52.730 [2024-12-07 10:10:21.399094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.399109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.399211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.399226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.399324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.399338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.399590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.399623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.399826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.399858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 
00:35:52.730 [2024-12-07 10:10:21.400011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.400043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.400250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.400282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.400461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.400493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.400692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.400706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.400863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.400877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 
00:35:52.730 [2024-12-07 10:10:21.401029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.401044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.401210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.401242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.401501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.401533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.401721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.401753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.401973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.402006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 
00:35:52.730 [2024-12-07 10:10:21.402260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.402291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.402547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.402579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.402711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.402742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.402980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.402996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.403162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.403195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 
00:35:52.730 [2024-12-07 10:10:21.403449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.403482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.403676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.403691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.403862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.403877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.403979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.403994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.404093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.404113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 
00:35:52.730 [2024-12-07 10:10:21.404203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.404218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.404370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.404384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.404609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.404641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.404851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.404882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.405041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.405074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 
00:35:52.730 [2024-12-07 10:10:21.405226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.405258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.405456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.405487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.405684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.405715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.405909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.405923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.406032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.406070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 
00:35:52.730 [2024-12-07 10:10:21.406293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.406325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.406509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.406541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.406748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.406762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.406957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.406991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.407248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.407279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 
00:35:52.730 [2024-12-07 10:10:21.407468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.407500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.407619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.407647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.407821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.407836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.407908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.407922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.408219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.408251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 
00:35:52.730 [2024-12-07 10:10:21.408446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.408477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.408619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.408652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.408848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.408862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.408973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.408988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.409154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.409168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 
00:35:52.730 [2024-12-07 10:10:21.409388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.409402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.409537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.409609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.409774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.409810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.410035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.410070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.410221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.410254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 
00:35:52.730 [2024-12-07 10:10:21.410509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.410540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.410723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.410737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.410834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.410848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.411012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.411028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.411222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.411236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 
00:35:52.730 [2024-12-07 10:10:21.411389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.411403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.411509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.411523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.411768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.411782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.411891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.411906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.412151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.412185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 
00:35:52.730 [2024-12-07 10:10:21.412338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.412369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.412479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.412513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.412706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.412720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.412940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.412995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 00:35:52.730 [2024-12-07 10:10:21.413273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.730 [2024-12-07 10:10:21.413305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.730 qpair failed and we were unable to recover it. 
00:35:52.730 [2024-12-07 10:10:21.413565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.413597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.413904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.413935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.414050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.414082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.414334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.414366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.414624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.414656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 
00:35:52.731 [2024-12-07 10:10:21.414870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.414885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.414981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.415024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.415177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.415209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.415330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.415363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.415498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.415529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 
00:35:52.731 [2024-12-07 10:10:21.415870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.415902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.416103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.416135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.416337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.416369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.416645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.416677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.416877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.416909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 
00:35:52.731 [2024-12-07 10:10:21.417126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.417159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.417384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.417416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.417541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.417573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.417822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.417853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.418054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.418087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 
00:35:52.731 [2024-12-07 10:10:21.418293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.418325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.418465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.418503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.418784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.418822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.418964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.418999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.419231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.419263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 
00:35:52.731 [2024-12-07 10:10:21.419413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.419433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.419619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.419633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.419720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.419734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.419977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.420011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.420117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.420148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 
00:35:52.731 [2024-12-07 10:10:21.420361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.420393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.420644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.420660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.420883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.420901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.421070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.421086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.421252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.421269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 
00:35:52.731 [2024-12-07 10:10:21.421437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.421455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.421696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.421711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.421777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.421791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.421945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.421964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.422056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.422070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 
00:35:52.731 [2024-12-07 10:10:21.422170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.422185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.422294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.422314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.422424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.422439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.422510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.422530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:52.731 [2024-12-07 10:10:21.422635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.422651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 
00:35:52.731 [2024-12-07 10:10:21.422759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.731 [2024-12-07 10:10:21.422773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:52.731 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.422886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.422900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.423070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.423086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.423244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.423264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.423500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.423522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 
00:35:53.010 [2024-12-07 10:10:21.423631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.423652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.423892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.423913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.424086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.424104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.424215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.424229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.424406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.424421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 
00:35:53.010 [2024-12-07 10:10:21.424574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.424589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.424665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.424695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.424919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.424963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.425097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.425128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.425312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.425344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 
00:35:53.010 [2024-12-07 10:10:21.425498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.425530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.425834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.425865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.426007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.426048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.426190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.426221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.426487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.426519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 
00:35:53.010 [2024-12-07 10:10:21.426750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.426783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.426978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.427011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.427214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.427246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.427363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.427395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.427617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.427648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 
00:35:53.010 [2024-12-07 10:10:21.427781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.427812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.428008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.428041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.428265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.428297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.428583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.428615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.428903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.428940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 
00:35:53.010 [2024-12-07 10:10:21.429032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.429047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.429149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.429164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.429381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.429395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.429526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.429558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.429688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.429720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 
00:35:53.010 [2024-12-07 10:10:21.429868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.429899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.430133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.430165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.430420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.430451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.430587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.430619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.430743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.430775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 
00:35:53.010 [2024-12-07 10:10:21.430916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.430977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.431041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2166f30 (9): Bad file descriptor 00:35:53.010 [2024-12-07 10:10:21.431373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.431406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.431598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.431614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.431777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.010 [2024-12-07 10:10:21.431792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.010 qpair failed and we were unable to recover it. 00:35:53.010 [2024-12-07 10:10:21.431933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.431981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 
00:35:53.011 [2024-12-07 10:10:21.432185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.432217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.432416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.432447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.432663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.432677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.432901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.432933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.433213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.433245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 
00:35:53.011 [2024-12-07 10:10:21.433498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.433528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.433735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.433767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.433954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.433970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.434202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.434233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.434493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.434525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 
00:35:53.011 [2024-12-07 10:10:21.434715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.434730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.434977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.435009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.435234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.435273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.435415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.435446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.435647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.435677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 
00:35:53.011 [2024-12-07 10:10:21.435870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.435902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.436164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.436178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.436357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.436388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.436597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.436629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.436921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.436961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 
00:35:53.011 [2024-12-07 10:10:21.437117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.437147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.437353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.437386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.437571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.437601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.437789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.437820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 00:35:53.011 [2024-12-07 10:10:21.438056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.011 [2024-12-07 10:10:21.438071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.011 qpair failed and we were unable to recover it. 
00:35:53.011 [2024-12-07 10:10:21.438252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.011 [2024-12-07 10:10:21.438283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.011 qpair failed and we were unable to recover it.
[... the same connect()/qpair failure triplet repeats with advancing timestamps through 10:10:21.458590 for tqpair=0x7efc04000b90 ...]
00:35:53.012 [2024-12-07 10:10:21.458904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.012 [2024-12-07 10:10:21.458937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.012 qpair failed and we were unable to recover it.
[... the triplet repeats with advancing timestamps through 10:10:21.463645 for tqpair=0x2159010 ...]
00:35:53.013 [2024-12-07 10:10:21.463840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.463881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.463992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.464008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.464181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.464213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.464411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.464442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.464647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.464681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 
00:35:53.013 [2024-12-07 10:10:21.464927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.464941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.465116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.465156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.465294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.465326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.465602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.465634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.465756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.465787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 
00:35:53.013 [2024-12-07 10:10:21.466067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.466083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.466298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.466313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.466407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.466421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.466570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.466615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.466797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.466829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 
00:35:53.013 [2024-12-07 10:10:21.467029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.467061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.467276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.467290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.467461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.467493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.467677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.467707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.467909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.467941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 
00:35:53.013 [2024-12-07 10:10:21.468163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.468196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.468404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.468435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.468718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.468750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.468899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.468913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.469030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.469044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 
00:35:53.013 [2024-12-07 10:10:21.469215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.469229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.469346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.469378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.469666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.469698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.469837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.469868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.470017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.470050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 
00:35:53.013 [2024-12-07 10:10:21.470233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.470265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.470462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.470477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.470708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.470723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.470943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.470991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.471273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.471304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 
00:35:53.013 [2024-12-07 10:10:21.471455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.471486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.471786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.471818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.472046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.472060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.472220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.472234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.472399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.472413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 
00:35:53.013 [2024-12-07 10:10:21.472709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.472741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.472937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.472958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.473178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.473210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.473406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.473439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.473651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.473682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 
00:35:53.013 [2024-12-07 10:10:21.473893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.473924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.474093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.474126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.474283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.474314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.474480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.474512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.474617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.474649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 
00:35:53.013 [2024-12-07 10:10:21.474861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.474875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.474977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.475010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.475275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.475306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.475507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.475540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.013 [2024-12-07 10:10:21.475791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.475822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 
00:35:53.013 [2024-12-07 10:10:21.475967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.013 [2024-12-07 10:10:21.476000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.013 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.476135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.476167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.476361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.476392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.476672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.476704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.476852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.476884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 
00:35:53.014 [2024-12-07 10:10:21.477019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.477052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.477138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.477153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.477301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.477315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.477477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.477491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.477646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.477679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 
00:35:53.014 [2024-12-07 10:10:21.477878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.477910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.478170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.478203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.478463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.478495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.478747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.478785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.478929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.478943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 
00:35:53.014 [2024-12-07 10:10:21.479142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.479174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.479369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.479400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.479575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.479607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.479747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.479779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.479892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.479927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 
00:35:53.014 [2024-12-07 10:10:21.480041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.480056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.480146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.480161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.480310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.480324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.480457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.480488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 00:35:53.014 [2024-12-07 10:10:21.480691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.014 [2024-12-07 10:10:21.480722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.014 qpair failed and we were unable to recover it. 
00:35:53.016 [2024-12-07 10:10:21.505072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.505087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.505296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.505327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.505512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.505544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.505692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.505724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.505972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.505986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 
00:35:53.016 [2024-12-07 10:10:21.506145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.506176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.506395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.506428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.506636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.506668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.506853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.506885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.507021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.507036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 
00:35:53.016 [2024-12-07 10:10:21.507226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.507240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.507408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.507423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.507611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.507643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.507786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.507817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.508014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.508047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 
00:35:53.016 [2024-12-07 10:10:21.508281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.508296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.508453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.508484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.508632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.508663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.508787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.508820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.508939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.508998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 
00:35:53.016 [2024-12-07 10:10:21.509285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.509316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.509466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.509496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.509600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.509614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.509787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.509819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.510082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.510116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 
00:35:53.016 [2024-12-07 10:10:21.510306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.510338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.510465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.510496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.510714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.510746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.510889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.510921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.511226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.511240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 
00:35:53.016 [2024-12-07 10:10:21.511415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.511430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.511532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.511546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.511796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.511827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.512061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.512100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.512391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.512422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 
00:35:53.016 [2024-12-07 10:10:21.512647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.512679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.512934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.512985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.513258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.513272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.513418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.513431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.513542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.513556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 
00:35:53.016 [2024-12-07 10:10:21.513649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.513662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.513769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.513783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.513945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.513965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.514142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.514172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.514302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.514333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 
00:35:53.016 [2024-12-07 10:10:21.514562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.514594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.514754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.514768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.514878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.514892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.514974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.514989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.515069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.515083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 
00:35:53.016 [2024-12-07 10:10:21.515271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.515301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.515421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.515452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.515708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.515739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.515893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.515925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.516142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.516156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 
00:35:53.016 [2024-12-07 10:10:21.516238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.516252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.516503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.516540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.516796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.516828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.517050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.517083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.517217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.517232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 
00:35:53.016 [2024-12-07 10:10:21.517458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.517495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.517691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.517722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.517860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.517892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.517997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.016 [2024-12-07 10:10:21.518012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.016 qpair failed and we were unable to recover it. 00:35:53.016 [2024-12-07 10:10:21.518189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.017 [2024-12-07 10:10:21.518203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.017 qpair failed and we were unable to recover it. 
00:35:53.017 [2024-12-07 10:10:21.518386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.017 [2024-12-07 10:10:21.518401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.017 qpair failed and we were unable to recover it. 00:35:53.017 [2024-12-07 10:10:21.518486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.017 [2024-12-07 10:10:21.518500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.017 qpair failed and we were unable to recover it. 00:35:53.017 [2024-12-07 10:10:21.518745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.017 [2024-12-07 10:10:21.518777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.017 qpair failed and we were unable to recover it. 00:35:53.017 [2024-12-07 10:10:21.518963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.017 [2024-12-07 10:10:21.518996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.017 qpair failed and we were unable to recover it. 00:35:53.017 [2024-12-07 10:10:21.519137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.017 [2024-12-07 10:10:21.519168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.017 qpair failed and we were unable to recover it. 
00:35:53.017 [2024-12-07 10:10:21.519373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.017 [2024-12-07 10:10:21.519405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.017 qpair failed and we were unable to recover it. 00:35:53.017 [2024-12-07 10:10:21.519599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.017 [2024-12-07 10:10:21.519631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.017 qpair failed and we were unable to recover it. 00:35:53.017 [2024-12-07 10:10:21.519880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.017 [2024-12-07 10:10:21.519894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.017 qpair failed and we were unable to recover it. 00:35:53.017 [2024-12-07 10:10:21.520054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.017 [2024-12-07 10:10:21.520069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.017 qpair failed and we were unable to recover it. 00:35:53.017 [2024-12-07 10:10:21.520244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.017 [2024-12-07 10:10:21.520276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.017 qpair failed and we were unable to recover it. 
00:35:53.017 [2024-12-07 10:10:21.520559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.017 [2024-12-07 10:10:21.520590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.017 qpair failed and we were unable to recover it.
00:35:53.017 [2024-12-07 10:10:21.526221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.017 [2024-12-07 10:10:21.526257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.017 qpair failed and we were unable to recover it.
00:35:53.018 [2024-12-07 10:10:21.546647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.018 [2024-12-07 10:10:21.546679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.018 qpair failed and we were unable to recover it. 00:35:53.018 [2024-12-07 10:10:21.546874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.018 [2024-12-07 10:10:21.546904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.018 qpair failed and we were unable to recover it. 00:35:53.018 [2024-12-07 10:10:21.547237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.018 [2024-12-07 10:10:21.547270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.018 qpair failed and we were unable to recover it. 00:35:53.018 [2024-12-07 10:10:21.547467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.018 [2024-12-07 10:10:21.547499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.018 qpair failed and we were unable to recover it. 00:35:53.018 [2024-12-07 10:10:21.547682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.018 [2024-12-07 10:10:21.547699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.018 qpair failed and we were unable to recover it. 
00:35:53.018 [2024-12-07 10:10:21.547862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.018 [2024-12-07 10:10:21.547876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.018 qpair failed and we were unable to recover it. 00:35:53.018 [2024-12-07 10:10:21.547989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.018 [2024-12-07 10:10:21.548004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.018 qpair failed and we were unable to recover it. 00:35:53.018 [2024-12-07 10:10:21.548173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.548203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.548378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.548410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.548557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.548587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 
00:35:53.019 [2024-12-07 10:10:21.548862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.548892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.549108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.549144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.549378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.549392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.549540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.549554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.549712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.549743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 
00:35:53.019 [2024-12-07 10:10:21.550015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.550048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.550253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.550284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.550437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.550469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.550680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.550713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.550891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.550904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 
00:35:53.019 [2024-12-07 10:10:21.551061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.551093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.551289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.551320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.551575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.551607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.551807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.551838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.552036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.552068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 
00:35:53.019 [2024-12-07 10:10:21.552196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.552210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.552370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.552384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.552476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.552491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.552594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.552608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.552788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.552819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 
00:35:53.019 [2024-12-07 10:10:21.553023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.553055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.553221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.553290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.553510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.553545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.553749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.553781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.553983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.554017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 
00:35:53.019 [2024-12-07 10:10:21.554223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.554255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.554479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.554510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.554768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.554782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.555035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.555068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.555364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.555395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 
00:35:53.019 [2024-12-07 10:10:21.555543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.555574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.555854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.555887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.556090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.556105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.556355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.556387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.556595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.556628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 
00:35:53.019 [2024-12-07 10:10:21.556862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.556895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.557052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.557085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.557303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.557336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.557541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.557573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.557724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.557757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 
00:35:53.019 [2024-12-07 10:10:21.557878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.557893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.557996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.558011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.558230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.558245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.558397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.558430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.558619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.558650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 
00:35:53.019 [2024-12-07 10:10:21.558801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.558832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.559112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.559127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.559332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.559346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.559531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.559571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.559757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.559788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 
00:35:53.019 [2024-12-07 10:10:21.559963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.559997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.560134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.560166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.560367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.560383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.560603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.560618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.560783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.560816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 
00:35:53.019 [2024-12-07 10:10:21.561032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.561067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.561268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.561299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.561519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.561550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.561767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.561799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.562016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.562049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 
00:35:53.019 [2024-12-07 10:10:21.562252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.562266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.562469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.562501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.562664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.562695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.562979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.563014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 00:35:53.019 [2024-12-07 10:10:21.563167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.019 [2024-12-07 10:10:21.563198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.019 qpair failed and we were unable to recover it. 
00:35:53.019 [2024-12-07 10:10:21.563324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.563356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.563590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.563621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.563755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.563788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.563991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.564025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.564216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.564247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 
00:35:53.020 [2024-12-07 10:10:21.564521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.564553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.564758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.564791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.564929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.564943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.565117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.565133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.565306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.565320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 
00:35:53.020 [2024-12-07 10:10:21.565499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.565538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.565747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.565779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.565977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.566011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.566130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.566144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.566326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.566359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 
00:35:53.020 [2024-12-07 10:10:21.566611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.566643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.566776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.566808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.567022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.567037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.567134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.567148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.567344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.567359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 
00:35:53.020 [2024-12-07 10:10:21.567524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.567539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.567696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.567727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.567924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.567965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.568102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.568135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.568373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.568406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 
00:35:53.020 [2024-12-07 10:10:21.568684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.568716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.568909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.568924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.569018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.569034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.569230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.569246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.569393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.569409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 
00:35:53.020 [2024-12-07 10:10:21.569631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.569663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.569933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.569954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.570073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.570089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.570191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.570206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.570313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.570345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 
00:35:53.020 [2024-12-07 10:10:21.570558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.570590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.570738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.570769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.570971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.571012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.571208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.571224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.571378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.571392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 
00:35:53.020 [2024-12-07 10:10:21.571653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.571685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.571945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.571986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.572162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.572194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.572461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.572476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.572670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.572684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 
00:35:53.020 [2024-12-07 10:10:21.572837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.572852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.573003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.573018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.573122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.573137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.573303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.573318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.573487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.573501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 
00:35:53.020 [2024-12-07 10:10:21.573739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.573772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.574029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.574045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.574247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.574279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.574422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.574455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.574604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.574636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 
00:35:53.020 [2024-12-07 10:10:21.574898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.574930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.575160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.575194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.575410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.575425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.575516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.575531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.575704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.575725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 
00:35:53.020 [2024-12-07 10:10:21.575959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.575992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.576206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.576238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.576376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.576391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.576492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.576506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.576669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.576684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 
00:35:53.020 [2024-12-07 10:10:21.576878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.020 [2024-12-07 10:10:21.576893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.020 qpair failed and we were unable to recover it. 00:35:53.020 [2024-12-07 10:10:21.577000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.577016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.577166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.577181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.577350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.577364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.577475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.577507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 
00:35:53.021 [2024-12-07 10:10:21.577651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.577683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.577891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.577923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.578082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.578113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.578295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.578327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.578481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.578514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 
00:35:53.021 [2024-12-07 10:10:21.578734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.578765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.578968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.579000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.579206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.579239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.579485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.579510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.579608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.579622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 
00:35:53.021 [2024-12-07 10:10:21.579787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.579802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.579963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.579979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.580077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.580092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.580295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.580310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.580404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.580419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 
00:35:53.021 [2024-12-07 10:10:21.580571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.580585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.580769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.580783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.580911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.580944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.581095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.581128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.581349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.581381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 
00:35:53.021 [2024-12-07 10:10:21.581514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.581546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.581671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.581702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.582009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.582042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.582321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.582353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 00:35:53.021 [2024-12-07 10:10:21.582492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.582523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 
00:35:53.021 [2024-12-07 10:10:21.582754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.021 [2024-12-07 10:10:21.582770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.021 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair and "qpair failed and we were unable to recover it." message for tqpair=0x2159010 (10.0.0.2:4420) repeat continuously through 2024-12-07 10:10:21.606629; repeats elided ...]
00:35:53.023 [2024-12-07 10:10:21.606789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.606805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.606882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.606897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.607051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.607066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.607222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.607253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.607466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.607499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 
00:35:53.023 [2024-12-07 10:10:21.607699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.607732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.607970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.608004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.608247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.608281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.608481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.608496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.608676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.608707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 
00:35:53.023 [2024-12-07 10:10:21.608855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.608888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.609096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.609130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.609311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.609325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.609445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.609460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.609561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.609575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 
00:35:53.023 [2024-12-07 10:10:21.609747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.609762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.609959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.609979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.610149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.610166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.610250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.610265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.610365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.610380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 
00:35:53.023 [2024-12-07 10:10:21.610466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.610480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.610595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.610627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.610854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.610887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.611115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.611149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.611261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.611278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 
00:35:53.023 [2024-12-07 10:10:21.611477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.611513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.611728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.611764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.612022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.612055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.612265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.612280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.612444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.612459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 
00:35:53.023 [2024-12-07 10:10:21.612651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.612666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.612771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.612787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.612898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.612913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.613102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.613118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.613286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.613316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 
00:35:53.023 [2024-12-07 10:10:21.613468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.613499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.613754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.613786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.614064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.614097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.614351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.614382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.614561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.614577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 
00:35:53.023 [2024-12-07 10:10:21.614737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.614752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.614870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.614910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.615082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.615116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.615369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.615401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.615633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.615670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 
00:35:53.023 [2024-12-07 10:10:21.615795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.615827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.615970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.616004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.616202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.616234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.616493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.616525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.616715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.616747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 
00:35:53.023 [2024-12-07 10:10:21.616946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.616986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.617139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.617153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.617323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.617356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.617541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.617573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.617722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.617754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 
00:35:53.023 [2024-12-07 10:10:21.617884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.617916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.618162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.023 [2024-12-07 10:10:21.618231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.023 qpair failed and we were unable to recover it. 00:35:53.023 [2024-12-07 10:10:21.618450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.618486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.618741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.618811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.618997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.619014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 
00:35:53.024 [2024-12-07 10:10:21.619180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.619194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.619353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.619384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.619595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.619626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.619863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.619895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.620108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.620123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 
00:35:53.024 [2024-12-07 10:10:21.620323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.620355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.620490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.620522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.620731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.620764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.620902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.620940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.621040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.621055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 
00:35:53.024 [2024-12-07 10:10:21.621241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.621256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.621411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.621426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.621581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.621596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.621743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.621757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.622012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.622027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 
00:35:53.024 [2024-12-07 10:10:21.622202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.622218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.622316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.622331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.622556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.622587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.622722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.622754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.622976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.623009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 
00:35:53.024 [2024-12-07 10:10:21.623190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.623204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.623374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.623389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.623540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.623572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.623707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.623738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.623940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.623984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 
00:35:53.024 [2024-12-07 10:10:21.624144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.624187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.624337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.624353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.624553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.624586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.624885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.624918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.625184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.625217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 
00:35:53.024 [2024-12-07 10:10:21.625438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.625471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.625689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.625721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.625998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.626032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.626192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.626207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.626325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.626340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 
00:35:53.024 [2024-12-07 10:10:21.626438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.626453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.626701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.626734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.626940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.626986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.627136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.627177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.627359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.627374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 
00:35:53.024 [2024-12-07 10:10:21.627541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.627555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.627663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.627678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.627865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.627881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.628026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.628041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.628219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.628251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 
00:35:53.024 [2024-12-07 10:10:21.628460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.628493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.628644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.628676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.628878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.628910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.629125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.629158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.629285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.629317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 
00:35:53.024 [2024-12-07 10:10:21.629451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.629466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.629566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.629581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.629728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.629743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.629891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.629923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.630074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.630107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 
00:35:53.024 [2024-12-07 10:10:21.630232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.630264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.630500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.630533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.630735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.630767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.630890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.630922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.631067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.631082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 
00:35:53.024 [2024-12-07 10:10:21.631181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.631196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.631325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.631357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.631506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.631539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.631744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.631777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.024 qpair failed and we were unable to recover it. 00:35:53.024 [2024-12-07 10:10:21.631921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.024 [2024-12-07 10:10:21.631966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 
00:35:53.025 [2024-12-07 10:10:21.632169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.632202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.632413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.632446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.632583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.632597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.632711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.632728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.632880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.632894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 
00:35:53.025 [2024-12-07 10:10:21.633138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.633153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.633314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.633329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.633415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.633430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.633514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.633529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.633603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.633618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 
00:35:53.025 [2024-12-07 10:10:21.633719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.633734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.633827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.633841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.633958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.633974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.634086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.634100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.634187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.634201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 
00:35:53.025 [2024-12-07 10:10:21.634327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.634359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.634565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.634598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.634800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.634832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.635035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.635052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.635159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.635175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 
00:35:53.025 [2024-12-07 10:10:21.635273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.635287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.635465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.635497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.635723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.635755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.635971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.636019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.636123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.636138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 
00:35:53.025 [2024-12-07 10:10:21.636226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.636240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.636434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.636449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.636562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.636578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.636660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.636674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.636785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.636800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 
00:35:53.025 [2024-12-07 10:10:21.636962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.636977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.637064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.637078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.637249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.637286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.637437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.637471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.637679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.637711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 
00:35:53.025 [2024-12-07 10:10:21.637926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.637965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.638123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.638157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.638356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.638389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.638573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.638588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.638773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.638807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 
00:35:53.025 [2024-12-07 10:10:21.639039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.639078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.639273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.639306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.639464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.639497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.639700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.639732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.640019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.640052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 
00:35:53.025 [2024-12-07 10:10:21.640176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.640192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.640301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.640316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.640417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.640432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.640532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.640547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.640725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.640741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 
00:35:53.025 [2024-12-07 10:10:21.640831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.640845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.641065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.641080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.641187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.641207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.641385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.641417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 00:35:53.025 [2024-12-07 10:10:21.641566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.025 [2024-12-07 10:10:21.641600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.025 qpair failed and we were unable to recover it. 
00:35:53.025 [2024-12-07 10:10:21.641737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.025 [2024-12-07 10:10:21.641769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.025 qpair failed and we were unable to recover it.
00:35:53.025 [2024-12-07 10:10:21.641903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.025 [2024-12-07 10:10:21.641935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.025 qpair failed and we were unable to recover it.
00:35:53.025 [2024-12-07 10:10:21.642155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.025 [2024-12-07 10:10:21.642187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.025 qpair failed and we were unable to recover it.
00:35:53.025 [2024-12-07 10:10:21.642303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.025 [2024-12-07 10:10:21.642318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.025 qpair failed and we were unable to recover it.
00:35:53.025 [2024-12-07 10:10:21.642410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.025 [2024-12-07 10:10:21.642425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.025 qpair failed and we were unable to recover it.
00:35:53.025 [2024-12-07 10:10:21.642678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.025 [2024-12-07 10:10:21.642693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.025 qpair failed and we were unable to recover it.
00:35:53.025 [2024-12-07 10:10:21.642927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.025 [2024-12-07 10:10:21.642942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.025 qpair failed and we were unable to recover it.
00:35:53.025 [2024-12-07 10:10:21.643165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.025 [2024-12-07 10:10:21.643180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.025 qpair failed and we were unable to recover it.
00:35:53.025 [2024-12-07 10:10:21.643284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.025 [2024-12-07 10:10:21.643299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.025 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.643450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.643464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.643632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.643648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.643754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.643768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.643893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.643908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.643996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.644011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.644222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.644237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.644329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.644343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.644429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.644444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.644611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.644626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.644802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.644834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.645019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.645052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.645189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.645221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.645359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.645375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.645528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.645542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.645688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.645705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.645814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.645843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.645977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.646015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.646170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.646203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.646402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.646434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.646556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.646587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.646708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.646742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.647032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.647065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.647215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.647230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.647421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.647450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.647581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.647613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.647814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.647846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.648130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.648163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.648297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.648311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.648414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.648429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.648578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.648593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.648695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.648711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.648820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.648835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.648991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.649007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.649155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.649189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.649315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.649347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.649546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.649580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.649703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.649735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.649874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.649906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.650048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.650081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.650294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.650325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.651575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.651604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.651856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.651872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.652013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.652029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.652258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.652292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.652424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.652461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.652633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.652647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.652731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.652746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.652912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.652927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.653109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.653124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.653348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.653381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.653572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.653606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.653767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.653800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.653943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.653986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.654254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.654287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.654503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.654535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.654731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.654764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.654898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.654938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.655103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.655136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.655253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.655268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.655421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.655457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.655619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.655652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.655771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.655804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.655941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.655988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.656197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.656231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.656364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.656393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.656498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.656513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.656609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.656624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.656880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.656894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.657002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.657019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.657130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.026 [2024-12-07 10:10:21.657145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.026 qpair failed and we were unable to recover it.
00:35:53.026 [2024-12-07 10:10:21.657259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.657273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.657382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.657397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.657612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.657626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.657773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.657788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.657879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.657894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.658011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.658026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.658112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.658127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.658216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.658231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.658415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.658430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.658522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.658537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.658709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.658724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.658828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.658842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.658992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.659006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.659125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.659140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.659220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.659235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.659404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.659420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.659510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.659524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.659685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.659700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.659929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.659944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.660043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.660057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.660153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.660168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.660252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.660267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.660364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.660379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.660483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.660498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.660604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.660619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.660837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.660852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.660954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.660972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.661056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.661071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.661226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.661240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.661326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.661340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.661443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.661457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.661572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.661587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.661684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.661698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.661854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.661869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.661970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.661987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.662070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.662085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.662179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.662194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.662381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.662396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.662552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.662566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.662719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.662733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.662936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.662962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.663048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.663062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.663155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.663170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.663324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.663340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.663425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.663439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.663534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.663548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.663647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.663663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.663820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.663835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.663961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.663977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.664069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.664083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.664166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.664180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.664279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.664293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.664454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.664468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.664563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.664577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.664731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.664746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.664848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.664864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.665010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.665025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.665195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.665210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.665306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.665322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.665430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.665444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.665609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.665623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.665772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.665787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.665897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.665911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.666077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.666092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.666188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.666203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.666287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.666301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.027 [2024-12-07 10:10:21.666402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.027 [2024-12-07 10:10:21.666419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.027 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.666570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.666585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.666739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.666753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.666930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.666946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.667119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.667133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.667288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.667303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.667393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.667407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.667499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.667514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.667595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.667609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.667826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.667841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.668067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.668083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.668176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.668190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.668362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.668376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.668475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.668490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.668599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.668613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.668786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.668801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.668907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.668922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.669039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.669054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.669164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.669180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.669278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.669305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.669397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.669409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.669549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.669560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.669635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.669645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.669739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.669750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.669859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.669870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.669962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.669974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.670051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.670062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.670173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.670205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.670345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.670376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.670576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.670609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.670811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.670843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.671059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.671093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.671226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.671258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.671451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.671461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.671574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.671584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.671655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.671665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.671830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.671841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.671924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.671936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.672095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.672106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.672260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.672301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.672542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.672581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.672703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.672734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.672861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.672893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.673036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.673070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.673271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.028 [2024-12-07 10:10:21.673302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.028 qpair failed and we were unable to recover it.
00:35:53.028 [2024-12-07 10:10:21.673514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.028 [2024-12-07 10:10:21.673525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.028 qpair failed and we were unable to recover it. 00:35:53.028 [2024-12-07 10:10:21.673664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.028 [2024-12-07 10:10:21.673675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.028 qpair failed and we were unable to recover it. 00:35:53.028 [2024-12-07 10:10:21.673815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.028 [2024-12-07 10:10:21.673825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.028 qpair failed and we were unable to recover it. 00:35:53.028 [2024-12-07 10:10:21.673972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.028 [2024-12-07 10:10:21.673985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.028 qpair failed and we were unable to recover it. 00:35:53.028 [2024-12-07 10:10:21.674166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.028 [2024-12-07 10:10:21.674177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.028 qpair failed and we were unable to recover it. 
00:35:53.028 [2024-12-07 10:10:21.674261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.028 [2024-12-07 10:10:21.674272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.028 qpair failed and we were unable to recover it. 00:35:53.028 [2024-12-07 10:10:21.674361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.028 [2024-12-07 10:10:21.674371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.028 qpair failed and we were unable to recover it. 00:35:53.028 [2024-12-07 10:10:21.674489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.028 [2024-12-07 10:10:21.674520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.028 qpair failed and we were unable to recover it. 00:35:53.028 [2024-12-07 10:10:21.674652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.028 [2024-12-07 10:10:21.674684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.028 qpair failed and we were unable to recover it. 00:35:53.028 [2024-12-07 10:10:21.674805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.028 [2024-12-07 10:10:21.674838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.028 qpair failed and we were unable to recover it. 
00:35:53.028 [2024-12-07 10:10:21.675013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.028 [2024-12-07 10:10:21.675046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.028 qpair failed and we were unable to recover it. 00:35:53.028 [2024-12-07 10:10:21.675216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.028 [2024-12-07 10:10:21.675226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.028 qpair failed and we were unable to recover it. 00:35:53.028 [2024-12-07 10:10:21.675311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.028 [2024-12-07 10:10:21.675322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.028 qpair failed and we were unable to recover it. 00:35:53.028 [2024-12-07 10:10:21.675411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.028 [2024-12-07 10:10:21.675422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.028 qpair failed and we were unable to recover it. 00:35:53.028 [2024-12-07 10:10:21.675516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.028 [2024-12-07 10:10:21.675526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.028 qpair failed and we were unable to recover it. 
00:35:53.028 [2024-12-07 10:10:21.675661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.028 [2024-12-07 10:10:21.675694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.028 qpair failed and we were unable to recover it. 00:35:53.028 [2024-12-07 10:10:21.675825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.028 [2024-12-07 10:10:21.675857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.028 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.676063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.676096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.676226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.676259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.676387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.676397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 
00:35:53.029 [2024-12-07 10:10:21.676552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.676564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.676717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.676727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.676819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.676830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.677015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.677049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.677244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.677277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 
00:35:53.029 [2024-12-07 10:10:21.677401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.677431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.677613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.677624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.677720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.677747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.677841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.677852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.677952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.677963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 
00:35:53.029 [2024-12-07 10:10:21.678135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.678146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.678242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.678252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.678331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.678341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.678485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.678497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.678576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.678587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 
00:35:53.029 [2024-12-07 10:10:21.678684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.678703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.678863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.678874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.679038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.679049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.679164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.679175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.679325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.679336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 
00:35:53.029 [2024-12-07 10:10:21.679515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.679526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.679608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.679618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.679751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.679783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.679986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.680019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.680157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.680189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 
00:35:53.029 [2024-12-07 10:10:21.680319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.680330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.680420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.680431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.680582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.680615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.680739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.680770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.680986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.681019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 
00:35:53.029 [2024-12-07 10:10:21.681156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.681187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.681338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.681368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.681512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.681523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.681627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.681638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.681717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.681727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 
00:35:53.029 [2024-12-07 10:10:21.681801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.681812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.681966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.681978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.682074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.682085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.682296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.682307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.682386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.682396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 
00:35:53.029 [2024-12-07 10:10:21.682501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.682511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.682585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.682595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.682749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.682760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.682900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.682911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.683014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.683025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 
00:35:53.029 [2024-12-07 10:10:21.683104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.683115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.683211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.683222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.683307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.683318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.683406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.683417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.683575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.683586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 
00:35:53.029 [2024-12-07 10:10:21.683659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.683669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.683765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.683777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.683879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.683910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.684057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.684091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.684216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.684248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 
00:35:53.029 [2024-12-07 10:10:21.684382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.684423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.684519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.684529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.684636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.684647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.684755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.684766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.684840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.684850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 
00:35:53.029 [2024-12-07 10:10:21.684930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.684942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.685042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.685054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.685150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.685182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.685368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.685401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 00:35:53.029 [2024-12-07 10:10:21.685524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.029 [2024-12-07 10:10:21.685556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420 00:35:53.029 qpair failed and we were unable to recover it. 
00:35:53.029 [2024-12-07 10:10:21.685689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.685723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.685801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.685812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.685955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.685966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.686066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.686076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.686177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.686188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.686283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.686293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.686444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.686454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.686517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.686527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.686614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.686625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.686784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.686794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.686939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.686955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.687153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.687185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.687319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.687350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.687495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.687536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.687615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.687625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.687687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.687697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.687781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.687792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbfc000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.688053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.688087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.688214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.688229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.688348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.688381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.688515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.688548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.688673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.688704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.688857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.688889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.689016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.689048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.689250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.689280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.689415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.689451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.689602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.689617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.689777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.689791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.689866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.689880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.689974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.689990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.690136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.690175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.690324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.690356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.690505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.690536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.690737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.690769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.690908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.690941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.691094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.691127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.691251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.691282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.691429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.691472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.691711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.691727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.691815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.691829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.692051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.692067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.692168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.692183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.692271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.692286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.692393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.692407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.692510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.692526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.692607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.692621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.692733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.692747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.692854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.692868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.692975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.692990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.693102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.693117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.693339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.693353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.693467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.693482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.693577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.693592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.693702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.693717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.693870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.693884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.693967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.693982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.694090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.694106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.694218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.694253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.694353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.694370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.694486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.694501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.694599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.694613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.694707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.694721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.694826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.694841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.695091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.695107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.695202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.695217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.030 qpair failed and we were unable to recover it.
00:35:53.030 [2024-12-07 10:10:21.695305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.030 [2024-12-07 10:10:21.695320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.695436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.695449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.695607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.695621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.695727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.695743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.695826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.695839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.695944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.695965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.696080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.696097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.696181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.696195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.696348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.696362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.696472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.696488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.696702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.696716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.696864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.696879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.696992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.697008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.697090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.697104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.697187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.697201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.697283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.697299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.697390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.697405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.697559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.697573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.697730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.697745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.697897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.697915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.698069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.698085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.698245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.698275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.698359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.698373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.698458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.698471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.698554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.698569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.698675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.698690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.698783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.698797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.698898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.698913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.699012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.699027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.699121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.699136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.699227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.699241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.699457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.699471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.699561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.699576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.699666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.699680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.699829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.699843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.699989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.700004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.700223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.700238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.700344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.700360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.700470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.700484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.700640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.700655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.700809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.700823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.700903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.700918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.701154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.701169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.701276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.701291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.701387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.701402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.701507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.701522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.701623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.701641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.701858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.701872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.702050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.702065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.702165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.031 [2024-12-07 10:10:21.702179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.031 qpair failed and we were unable to recover it.
00:35:53.031 [2024-12-07 10:10:21.702367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.031 [2024-12-07 10:10:21.702382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.031 qpair failed and we were unable to recover it. 00:35:53.031 [2024-12-07 10:10:21.702487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.031 [2024-12-07 10:10:21.702502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.031 qpair failed and we were unable to recover it. 00:35:53.031 [2024-12-07 10:10:21.702581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.031 [2024-12-07 10:10:21.702596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.031 qpair failed and we were unable to recover it. 00:35:53.031 [2024-12-07 10:10:21.702767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.031 [2024-12-07 10:10:21.702783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.031 qpair failed and we were unable to recover it. 00:35:53.031 [2024-12-07 10:10:21.702880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.031 [2024-12-07 10:10:21.702894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.031 qpair failed and we were unable to recover it. 
00:35:53.031 [2024-12-07 10:10:21.702985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.031 [2024-12-07 10:10:21.703001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.031 qpair failed and we were unable to recover it. 00:35:53.031 [2024-12-07 10:10:21.703110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.031 [2024-12-07 10:10:21.703124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.031 qpair failed and we were unable to recover it. 00:35:53.031 [2024-12-07 10:10:21.703349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.031 [2024-12-07 10:10:21.703364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.031 qpair failed and we were unable to recover it. 00:35:53.031 [2024-12-07 10:10:21.703466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.031 [2024-12-07 10:10:21.703481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.031 qpair failed and we were unable to recover it. 00:35:53.031 [2024-12-07 10:10:21.703581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.031 [2024-12-07 10:10:21.703596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.031 qpair failed and we were unable to recover it. 
00:35:53.031 [2024-12-07 10:10:21.703829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.031 [2024-12-07 10:10:21.703844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.031 qpair failed and we were unable to recover it. 00:35:53.031 [2024-12-07 10:10:21.704022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.031 [2024-12-07 10:10:21.704037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.031 qpair failed and we were unable to recover it. 00:35:53.031 [2024-12-07 10:10:21.704227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.031 [2024-12-07 10:10:21.704242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.031 qpair failed and we were unable to recover it. 00:35:53.031 [2024-12-07 10:10:21.704430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.031 [2024-12-07 10:10:21.704446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.031 qpair failed and we were unable to recover it. 00:35:53.031 [2024-12-07 10:10:21.704722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.031 [2024-12-07 10:10:21.704736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.031 qpair failed and we were unable to recover it. 
00:35:53.031 [2024-12-07 10:10:21.704902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.031 [2024-12-07 10:10:21.704916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.031 qpair failed and we were unable to recover it. 00:35:53.031 [2024-12-07 10:10:21.705011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.031 [2024-12-07 10:10:21.705026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.031 qpair failed and we were unable to recover it. 00:35:53.031 [2024-12-07 10:10:21.705123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.031 [2024-12-07 10:10:21.705138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.031 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.705288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.705302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.705411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.705426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 
00:35:53.032 [2024-12-07 10:10:21.705522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.705537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.705628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.705643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.705734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.705749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.705921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.705938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.706106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.706122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 
00:35:53.032 [2024-12-07 10:10:21.706294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.706309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.706407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.706421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.706570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.706584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.706671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.706685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.706837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.706852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 
00:35:53.032 [2024-12-07 10:10:21.707021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.707036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.707208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.707224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.707320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.707334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.707432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.707447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.707551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.707566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 
00:35:53.032 [2024-12-07 10:10:21.707747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.707763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.707931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.707945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.708123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.708137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.708229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.708244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.708395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.708410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 
00:35:53.032 [2024-12-07 10:10:21.708580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.708596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.708765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.708779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.708954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.708969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.709155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.709173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.709419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.709437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 
00:35:53.032 [2024-12-07 10:10:21.709532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.709547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.709704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.709720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.709829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.709850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.709941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.709962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.710076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.710092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 
00:35:53.032 [2024-12-07 10:10:21.710193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.710208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.710397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.710411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.710574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.710590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.710738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.710752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.710916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.710932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 
00:35:53.032 [2024-12-07 10:10:21.711098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.711116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.711219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.711236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.711423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.711441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.711544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.711559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.711810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.711827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 
00:35:53.032 [2024-12-07 10:10:21.712074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.712090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.712196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.712212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.712380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.712394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.712619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.712636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.712731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.712748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 
00:35:53.032 [2024-12-07 10:10:21.712934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.712960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.032 [2024-12-07 10:10:21.713230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.032 [2024-12-07 10:10:21.713246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.032 qpair failed and we were unable to recover it. 00:35:53.315 [2024-12-07 10:10:21.713474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.315 [2024-12-07 10:10:21.713491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.315 qpair failed and we were unable to recover it. 00:35:53.315 [2024-12-07 10:10:21.713714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.315 [2024-12-07 10:10:21.713729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.315 qpair failed and we were unable to recover it. 00:35:53.315 [2024-12-07 10:10:21.713897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.315 [2024-12-07 10:10:21.713912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.315 qpair failed and we were unable to recover it. 
00:35:53.315 [2024-12-07 10:10:21.714031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.315 [2024-12-07 10:10:21.714046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.315 qpair failed and we were unable to recover it. 00:35:53.315 [2024-12-07 10:10:21.714205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.315 [2024-12-07 10:10:21.714226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.315 qpair failed and we were unable to recover it. 00:35:53.315 [2024-12-07 10:10:21.714453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.315 [2024-12-07 10:10:21.714476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.315 qpair failed and we were unable to recover it. 00:35:53.315 [2024-12-07 10:10:21.714660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.315 [2024-12-07 10:10:21.714682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.315 qpair failed and we were unable to recover it. 00:35:53.315 [2024-12-07 10:10:21.714939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.315 [2024-12-07 10:10:21.714977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.315 qpair failed and we were unable to recover it. 
00:35:53.315 [2024-12-07 10:10:21.715087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.715108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.715322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.715343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.715446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.715465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.715664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.715686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.715867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.715889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.716015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.716037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.716157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.716174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.716273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.716288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.716506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.716521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.716683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.716698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.716849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.716865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.717030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.717045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.717149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.717163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.717255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.717270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.717454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.717469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.717674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.717689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.717847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.717865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.718049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.718064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.718164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.718178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.718330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.718345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.718529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.718544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.718643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.718658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.718749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.718763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.718879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.718893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.719070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.719085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.719185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.719200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.719386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.719400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.719555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.719570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.719677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.719693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.719799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.719813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.719910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.719925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.720115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.720130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.720213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.720228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.720399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.720413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.720512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.720527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.720631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.720646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.720749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.720765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.720937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.720959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.721121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.315 [2024-12-07 10:10:21.721136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.315 qpair failed and we were unable to recover it.
00:35:53.315 [2024-12-07 10:10:21.721282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.721297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.721519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.721535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.721720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.721735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.721840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.721856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.722031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.722050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.722219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.722234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.722392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.722407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.722531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.722546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.722718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.722733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.722975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.722990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.723151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.723165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.723269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.723284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.723392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.723407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.723590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.723605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.723773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.723789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.723882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.723897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.724050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.724066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.724286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.724301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.724486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.724501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.724664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.724679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.724771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.724786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.724972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.724988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.725155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.725170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.725260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.725274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.725442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.725459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.725704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.725719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.725821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.725837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.725959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.725974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.726079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.726094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.726245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.726259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.726407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.726422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.726523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.726538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.726624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.726638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.726804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.726820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.726981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.726997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.727150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.727164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.727266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.727281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.727370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.727385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.727603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.727618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.727782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.727798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.727883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.727898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.728056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.728071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.728177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.728192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.728337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.728352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.728593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.728608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.728789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.728820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.728937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.728959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.729041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.729056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.729150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.729166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.729415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.729430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.729644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.729660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.729822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.729836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.730009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.730025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.730114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.730129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.730246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.730263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.730436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.730451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.730606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.730620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.730740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.730754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.730925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.730944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.731115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.731129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.731315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.731329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.731489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.731504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.731608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.731622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.731783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.731798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.731891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.731906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.732122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.732137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.732237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.316 [2024-12-07 10:10:21.732251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.316 qpair failed and we were unable to recover it.
00:35:53.316 [2024-12-07 10:10:21.732357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.317 [2024-12-07 10:10:21.732371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.317 qpair failed and we were unable to recover it.
00:35:53.317 [2024-12-07 10:10:21.732527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.317 [2024-12-07 10:10:21.732540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.317 qpair failed and we were unable to recover it.
00:35:53.317 [2024-12-07 10:10:21.732699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.317 [2024-12-07 10:10:21.732714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.317 qpair failed and we were unable to recover it.
00:35:53.317 [2024-12-07 10:10:21.732810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.317 [2024-12-07 10:10:21.732824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.317 qpair failed and we were unable to recover it.
00:35:53.317 [2024-12-07 10:10:21.732976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.317 [2024-12-07 10:10:21.732992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.317 qpair failed and we were unable to recover it.
00:35:53.317 [2024-12-07 10:10:21.733206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.317 [2024-12-07 10:10:21.733221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.317 qpair failed and we were unable to recover it.
00:35:53.317 [2024-12-07 10:10:21.733310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.317 [2024-12-07 10:10:21.733324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.317 qpair failed and we were unable to recover it.
00:35:53.317 [2024-12-07 10:10:21.733474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.317 [2024-12-07 10:10:21.733488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.317 qpair failed and we were unable to recover it.
00:35:53.317 [2024-12-07 10:10:21.733671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.317 [2024-12-07 10:10:21.733685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.317 qpair failed and we were unable to recover it.
00:35:53.317 [2024-12-07 10:10:21.733845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.733860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.733959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.733974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.734148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.734162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.734268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.734283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.734384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.734399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 
00:35:53.317 [2024-12-07 10:10:21.734487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.734501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.734729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.734744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.734832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.734847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.734953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.734968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.735156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.735172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 
00:35:53.317 [2024-12-07 10:10:21.735251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.735265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.735433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.735447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.735546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.735560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.735642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.735656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.735807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.735822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 
00:35:53.317 [2024-12-07 10:10:21.735992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.736008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.736112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.736127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.736228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.736243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.736430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.736445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.736547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.736562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 
00:35:53.317 [2024-12-07 10:10:21.736718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.736733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.736828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.736842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.736951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.736969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.737119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.737140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.737257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.737271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 
00:35:53.317 [2024-12-07 10:10:21.737363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.737378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.737474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.737489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.737679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.737694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.737845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.737860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.737964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.737978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 
00:35:53.317 [2024-12-07 10:10:21.738074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.738089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.738264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.738279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.738368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.738383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.738469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.738484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.738652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.738667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 
00:35:53.317 [2024-12-07 10:10:21.738840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.738854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.738986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.739002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.739080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.739094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.739261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.739275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.739362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.739376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 
00:35:53.317 [2024-12-07 10:10:21.739458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.739473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.739559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.739574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.739842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.739857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.740114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.740130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.740374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.740388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 
00:35:53.317 [2024-12-07 10:10:21.740539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.740554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.740823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.740838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.317 [2024-12-07 10:10:21.741054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.317 [2024-12-07 10:10:21.741069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.317 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.741231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.741246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.741345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.741372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 
00:35:53.318 [2024-12-07 10:10:21.741565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.741586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.741749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.741763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.741857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.741872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.742129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.742147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.742352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.742367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 
00:35:53.318 [2024-12-07 10:10:21.742464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.742479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.742629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.742644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.742814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.742830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.742920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.742934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.743037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.743053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 
00:35:53.318 [2024-12-07 10:10:21.743211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.743226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.743401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.743416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.743534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.743551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.743667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.743683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.743792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.743806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 
00:35:53.318 [2024-12-07 10:10:21.743914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.743930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.744128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.744145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.744243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.744258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.744406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.744420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.744522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.744537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 
00:35:53.318 [2024-12-07 10:10:21.744645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.744660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.744816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.744832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.745050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.745067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.745169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.745185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.745284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.745299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 
00:35:53.318 [2024-12-07 10:10:21.745469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.745485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.745590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.745609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.745696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.745711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.745868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.745883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.746110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.746127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 
00:35:53.318 [2024-12-07 10:10:21.746236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.746250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.746515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.746530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.746702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.746716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.746870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.746886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.747067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.747085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 
00:35:53.318 [2024-12-07 10:10:21.747202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.747218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.747385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.747400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.747566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.747581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.747677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.747692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.747796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.747811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 
00:35:53.318 [2024-12-07 10:10:21.747966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.747981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.748151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.748166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.748277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.748291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.748398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.748413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.748571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.748588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 
00:35:53.318 [2024-12-07 10:10:21.748755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.748770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.748867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.748882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.749037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.749054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.749163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.749177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.749400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.749414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 
00:35:53.318 [2024-12-07 10:10:21.749511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.749526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.749686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.749700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.749856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.749871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.749966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.749987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.750078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.750093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 
00:35:53.318 [2024-12-07 10:10:21.750193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.750208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.750368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.750383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.750625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.750640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.750802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.750817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.750975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.750991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 
00:35:53.318 [2024-12-07 10:10:21.751207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.751221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.751309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.318 [2024-12-07 10:10:21.751324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.318 qpair failed and we were unable to recover it. 00:35:53.318 [2024-12-07 10:10:21.751420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.751434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.751584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.751598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.751715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.751732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 
00:35:53.319 [2024-12-07 10:10:21.751921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.751936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.752035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.752050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.752174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.752189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.752290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.752304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.752412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.752427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 
00:35:53.319 [2024-12-07 10:10:21.752525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.752541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.752639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.752653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.752807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.752822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.753042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.753058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.753211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.753226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 
00:35:53.319 [2024-12-07 10:10:21.753329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.753344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.753450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.753464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.753539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.753555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.753647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.753662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.753765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.753780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 
00:35:53.319 [2024-12-07 10:10:21.753874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.753891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.754041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.754055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.754213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.754229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.754384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.754399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.754575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.754590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 
00:35:53.319 [2024-12-07 10:10:21.754783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.754798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.754892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.754906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.755100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.755115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.755242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.755257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.755347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.755362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 
00:35:53.319 [2024-12-07 10:10:21.755523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.755537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.755712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.755727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.755880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.755895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.756064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.756080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.756191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.756206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 
00:35:53.319 [2024-12-07 10:10:21.756380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.756395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.756593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.756610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.756802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.756817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.756997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.757016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.757178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.757193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 
00:35:53.319 [2024-12-07 10:10:21.757373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.757388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.757483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.757498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.757649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.757671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.757845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.757861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.758020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.758037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 
00:35:53.319 [2024-12-07 10:10:21.758192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.758207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.758311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.758326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.758476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.758494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.758737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.758752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.758924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.758939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 
00:35:53.319 [2024-12-07 10:10:21.759117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.759132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.759350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.759365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.759477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.759492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.759609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.759624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.759729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.759744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 
00:35:53.319 [2024-12-07 10:10:21.759935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.759956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.760053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.760068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.760153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.760169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.760342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.760357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.760523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.760539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 
00:35:53.319 [2024-12-07 10:10:21.760759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.760773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.760936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.760960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.761123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.761143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.761307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.761322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.761493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.761507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 
00:35:53.319 [2024-12-07 10:10:21.761609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.761625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.761791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.761806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.762022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.762038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.762257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.319 [2024-12-07 10:10:21.762272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.319 qpair failed and we were unable to recover it. 00:35:53.319 [2024-12-07 10:10:21.762425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.762440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 
00:35:53.320 [2024-12-07 10:10:21.762532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.762546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.762640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.762654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.762844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.762859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.763034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.763049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.763220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.763234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 
00:35:53.320 [2024-12-07 10:10:21.763334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.763350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.763506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.763521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.763759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.763774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.763954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.763969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.764069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.764084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 
00:35:53.320 [2024-12-07 10:10:21.764193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.764208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.764373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.764388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.764587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.764602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.764788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.764804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.764902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.764916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 
00:35:53.320 [2024-12-07 10:10:21.765028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.765044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.765155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.765170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.765401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.765415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.765573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.765592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.765677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.765692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 
00:35:53.320 [2024-12-07 10:10:21.765856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.765871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.766028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.766044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.766139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.766154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.766347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.766362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.766510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.766525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 
00:35:53.320 [2024-12-07 10:10:21.766690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.766705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.766796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.766812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.766916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.766931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.767048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.767063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.767146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.767160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 
00:35:53.320 [2024-12-07 10:10:21.767274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.767289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.767446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.767461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.767629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.767644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.767809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.767824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.768006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.768021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 
00:35:53.320 [2024-12-07 10:10:21.768109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.768125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.768226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.768241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.768405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.768419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.768663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.768677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.768843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.768858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 
00:35:53.320 [2024-12-07 10:10:21.768967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.768982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.769166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.769181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.769289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.769304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.769476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.769491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.769731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.769746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 
00:35:53.320 [2024-12-07 10:10:21.769844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.769862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.769960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.769974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.770120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.770135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.770234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.770249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.770355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.770369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 
00:35:53.320 [2024-12-07 10:10:21.770476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.770490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.770629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.770644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.770883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.770898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.770994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.771010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.771171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.771186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 
00:35:53.320 [2024-12-07 10:10:21.771380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.771395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.771503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.771518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.771615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.771629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.771868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.771883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.771993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.772009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 
00:35:53.320 [2024-12-07 10:10:21.772110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.772124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.772212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.320 [2024-12-07 10:10:21.772226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.320 qpair failed and we were unable to recover it. 00:35:53.320 [2024-12-07 10:10:21.772443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.772458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.772570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.772584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.772765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.772779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 
00:35:53.321 [2024-12-07 10:10:21.773004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.773020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.773123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.773138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.773307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.773322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.773426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.773441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.773548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.773564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 
00:35:53.321 [2024-12-07 10:10:21.773660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.773674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.773837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.773852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.773969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.773997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.774095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.774110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.774359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.774373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 
00:35:53.321 [2024-12-07 10:10:21.774542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.774558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.774646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.774660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.774838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.774852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.774958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.774974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.775075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.775090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 
00:35:53.321 [2024-12-07 10:10:21.775333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.775347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.775580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.775595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.775700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.775715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.775866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.775881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.775990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.776005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 
00:35:53.321 [2024-12-07 10:10:21.776085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.776100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.776211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.776235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.776414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.776432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.776608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.776624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 00:35:53.321 [2024-12-07 10:10:21.776730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.321 [2024-12-07 10:10:21.776746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.321 qpair failed and we were unable to recover it. 
00:35:53.323 [2024-12-07 10:10:21.794056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.794071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.794225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.794239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.794337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.794353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.794570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.794587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.794677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.794691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 
00:35:53.323 [2024-12-07 10:10:21.794960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.794976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.795145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.795164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.795366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.795382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.795493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.795507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.795670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.795685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 
00:35:53.323 [2024-12-07 10:10:21.795858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.795874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.795991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.796006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.796160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.796176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.796364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.796379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.796481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.796496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 
00:35:53.323 [2024-12-07 10:10:21.796762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.796776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.796882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.796897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.797002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.797016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.797101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.797115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.797215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.797229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 
00:35:53.323 [2024-12-07 10:10:21.797413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.797427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.797541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.797555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.797808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.797823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.797899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.797913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.798006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.798021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 
00:35:53.323 [2024-12-07 10:10:21.798192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.798207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.798361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.798374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.798517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.798537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.798642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.798657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.798838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.798853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 
00:35:53.323 [2024-12-07 10:10:21.798945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.798967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.799132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.799147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.799307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.799321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.799485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.799500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.799663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.799679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 
00:35:53.323 [2024-12-07 10:10:21.799792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.799807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.800026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.800043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.800217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.800233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.800380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.800394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.800489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.800505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 
00:35:53.323 [2024-12-07 10:10:21.800611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.800625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.800795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.800809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.800973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.800988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.801098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.801113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.801280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.801295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 
00:35:53.323 [2024-12-07 10:10:21.801403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.801418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.801584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.801604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.801701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.801715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.801903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.801917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.802007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.802023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 
00:35:53.323 [2024-12-07 10:10:21.802119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.802133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.802279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.802293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.802399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.802413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.802561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.802577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.802741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.802755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 
00:35:53.323 [2024-12-07 10:10:21.802975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.802990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.803097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.803111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.803210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.803224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.323 [2024-12-07 10:10:21.803384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.323 [2024-12-07 10:10:21.803399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.323 qpair failed and we were unable to recover it. 00:35:53.324 [2024-12-07 10:10:21.803491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.803505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 
00:35:53.324 [2024-12-07 10:10:21.803661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.803676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 00:35:53.324 [2024-12-07 10:10:21.803833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.803848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 00:35:53.324 [2024-12-07 10:10:21.803962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.803977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 00:35:53.324 [2024-12-07 10:10:21.804081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.804094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 00:35:53.324 [2024-12-07 10:10:21.804277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.804291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 
00:35:53.324 [2024-12-07 10:10:21.804381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.804396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 00:35:53.324 [2024-12-07 10:10:21.804563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.804578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 00:35:53.324 [2024-12-07 10:10:21.804805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.804820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 00:35:53.324 [2024-12-07 10:10:21.804969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.804984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 00:35:53.324 [2024-12-07 10:10:21.805203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.805217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 
00:35:53.324 [2024-12-07 10:10:21.805331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.805346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 00:35:53.324 [2024-12-07 10:10:21.805448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.805462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 00:35:53.324 [2024-12-07 10:10:21.805641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.805657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 00:35:53.324 [2024-12-07 10:10:21.805779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.805795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 00:35:53.324 [2024-12-07 10:10:21.806068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.806085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 
00:35:53.324 [2024-12-07 10:10:21.806271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.806285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 00:35:53.324 [2024-12-07 10:10:21.806434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.806448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 00:35:53.324 [2024-12-07 10:10:21.806631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.806645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 00:35:53.324 [2024-12-07 10:10:21.806868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.806883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 00:35:53.324 [2024-12-07 10:10:21.806973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.324 [2024-12-07 10:10:21.806989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.324 qpair failed and we were unable to recover it. 
00:35:53.326 [2024-12-07 10:10:21.824758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.824772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.824945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.824966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.825079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.825096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.825253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.825268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.825418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.825433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 
00:35:53.326 [2024-12-07 10:10:21.825583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.825596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.825718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.825732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.825820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.825835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.825984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.825999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.826096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.826111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 
00:35:53.326 [2024-12-07 10:10:21.826202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.826217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.826398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.826412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.826583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.826598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.826695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.826709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.826823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.826836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 
00:35:53.326 [2024-12-07 10:10:21.826931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.826951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.827138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.827155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.827311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.827325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.827545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.827561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.827691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.827705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 
00:35:53.326 [2024-12-07 10:10:21.827808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.827823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.827914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.827928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.828100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.828114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.828298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.828314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.828534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.828551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 
00:35:53.326 [2024-12-07 10:10:21.828650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.828665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.828757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.828775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.828979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.828995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.829162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.829177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.829275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.829290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 
00:35:53.326 [2024-12-07 10:10:21.829441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.829458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.829560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.829575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.829737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.829753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.829853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.829870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.829971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.829986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 
00:35:53.326 [2024-12-07 10:10:21.830132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.830148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.830385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.830401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.830501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.830515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.830664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.830679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.830829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.830844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 
00:35:53.326 [2024-12-07 10:10:21.831111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.831127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.831304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.831319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.831505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.831520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.831633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.831648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.831731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.831746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 
00:35:53.326 [2024-12-07 10:10:21.831841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.831857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.832016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.832031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.832122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.832136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.832240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.832254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.832363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.832379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 
00:35:53.326 [2024-12-07 10:10:21.832541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.832555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.832711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.832729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.832882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.832899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.832984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.832998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.833183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.833199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 
00:35:53.326 [2024-12-07 10:10:21.833362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.833377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.833536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.833559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.833739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.833754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.833856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.833870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.833963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.833978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 
00:35:53.326 [2024-12-07 10:10:21.834079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.326 [2024-12-07 10:10:21.834093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.326 qpair failed and we were unable to recover it. 00:35:53.326 [2024-12-07 10:10:21.834178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.327 [2024-12-07 10:10:21.834193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.327 qpair failed and we were unable to recover it. 00:35:53.327 [2024-12-07 10:10:21.834350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.327 [2024-12-07 10:10:21.834365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.327 qpair failed and we were unable to recover it. 00:35:53.327 [2024-12-07 10:10:21.834563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.327 [2024-12-07 10:10:21.834580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.327 qpair failed and we were unable to recover it. 00:35:53.327 [2024-12-07 10:10:21.834671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.327 [2024-12-07 10:10:21.834685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.327 qpair failed and we were unable to recover it. 
00:35:53.327 [2024-12-07 10:10:21.834790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.327 [2024-12-07 10:10:21.834804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.327 qpair failed and we were unable to recover it. 00:35:53.327 [2024-12-07 10:10:21.834959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.327 [2024-12-07 10:10:21.834975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.327 qpair failed and we were unable to recover it. 00:35:53.327 [2024-12-07 10:10:21.835067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.327 [2024-12-07 10:10:21.835081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.327 qpair failed and we were unable to recover it. 00:35:53.327 [2024-12-07 10:10:21.835177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.327 [2024-12-07 10:10:21.835192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.327 qpair failed and we were unable to recover it. 00:35:53.327 [2024-12-07 10:10:21.835358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.327 [2024-12-07 10:10:21.835373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.327 qpair failed and we were unable to recover it. 
00:35:53.327 [2024-12-07 10:10:21.835480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.327 [2024-12-07 10:10:21.835497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.327 qpair failed and we were unable to recover it. 00:35:53.327 [2024-12-07 10:10:21.835671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.327 [2024-12-07 10:10:21.835687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.327 qpair failed and we were unable to recover it. 00:35:53.327 [2024-12-07 10:10:21.835785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.327 [2024-12-07 10:10:21.835800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.327 qpair failed and we were unable to recover it. 00:35:53.327 [2024-12-07 10:10:21.835958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.327 [2024-12-07 10:10:21.835973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.327 qpair failed and we were unable to recover it. 00:35:53.327 [2024-12-07 10:10:21.836054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.327 [2024-12-07 10:10:21.836067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.327 qpair failed and we were unable to recover it. 
00:35:53.327 [2024-12-07 10:10:21.836171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.327 [2024-12-07 10:10:21.836184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.327 qpair failed and we were unable to recover it.
00:35:53.327 [2024-12-07 10:10:21.837115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.327 [2024-12-07 10:10:21.837150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.327 qpair failed and we were unable to recover it.
[... the two error entries above repeat verbatim (alternating between tqpair=0x7efc04000b90 and tqpair=0x2159010, always addr=10.0.0.2, port=4420, errno = 111) for every retried connection attempt from 10:10:21.836171 through 10:10:21.852028; identical repeats elided ...]
00:35:53.329 [2024-12-07 10:10:21.852120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.852140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.852320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.852335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.852486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.852502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.852659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.852673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.852756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.852772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 
00:35:53.329 [2024-12-07 10:10:21.852873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.852888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.852989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.853005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.853095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.853109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.853204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.853219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.853303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.853317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 
00:35:53.329 [2024-12-07 10:10:21.853477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.853492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.853645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.853659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.853741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.853756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.853848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.853862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.854041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.854057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 
00:35:53.329 [2024-12-07 10:10:21.854149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.854164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.854243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.854258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.854355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.854369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.854461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.854476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.854574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.854589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 
00:35:53.329 [2024-12-07 10:10:21.854682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.854697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.854809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.854823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.854904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.854919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.855027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.855044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.855151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.855168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 
00:35:53.329 [2024-12-07 10:10:21.855257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.855271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.855361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.855375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.855469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.855484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.855579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.855593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.855748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.855763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 
00:35:53.329 [2024-12-07 10:10:21.855870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.855884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.855977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.855994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.856078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.856094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.856198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.856212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.856309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.856323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 
00:35:53.329 [2024-12-07 10:10:21.856435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.856450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.856604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.856618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.856771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.856784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.856906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.856920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.857014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.857028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 
00:35:53.329 [2024-12-07 10:10:21.857094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.857111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.857208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.857223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.857320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.857334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.857414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.857429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.857584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.857598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 
00:35:53.329 [2024-12-07 10:10:21.857685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.857701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.857768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.857782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.858000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.858015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.858110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.858125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.858290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.858305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 
00:35:53.329 [2024-12-07 10:10:21.858413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.858428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.858648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.858662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.858754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.858769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.858935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.858953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.859070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.859085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 
00:35:53.329 [2024-12-07 10:10:21.859276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.859292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.859391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.859405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.859561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.859576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.859673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.859687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.859904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.859920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 
00:35:53.329 [2024-12-07 10:10:21.860009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.860024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.860195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.860210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.860359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.860373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.860457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.860472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.860583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.860597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 
00:35:53.329 [2024-12-07 10:10:21.860684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.860699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.860894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.860908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.861014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.329 [2024-12-07 10:10:21.861031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.329 qpair failed and we were unable to recover it. 00:35:53.329 [2024-12-07 10:10:21.861274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.330 [2024-12-07 10:10:21.861289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.330 qpair failed and we were unable to recover it. 00:35:53.330 [2024-12-07 10:10:21.861374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.330 [2024-12-07 10:10:21.861390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.330 qpair failed and we were unable to recover it. 
00:35:53.330 [2024-12-07 10:10:21.861562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.330 [2024-12-07 10:10:21.861576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.330 qpair failed and we were unable to recover it. 00:35:53.330 [2024-12-07 10:10:21.861683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.330 [2024-12-07 10:10:21.861698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.330 qpair failed and we were unable to recover it. 00:35:53.330 [2024-12-07 10:10:21.861782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.330 [2024-12-07 10:10:21.861797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.330 qpair failed and we were unable to recover it. 00:35:53.330 [2024-12-07 10:10:21.861900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.330 [2024-12-07 10:10:21.861914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.330 qpair failed and we were unable to recover it. 00:35:53.330 [2024-12-07 10:10:21.862020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.330 [2024-12-07 10:10:21.862038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.330 qpair failed and we were unable to recover it. 
00:35:53.330 [2024-12-07 10:10:21.862249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.862264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.862365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.862380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.862545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.862559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.862661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.862677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.862775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.862789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.862941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.862965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.863067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.863082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.863312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.863326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.863415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.863430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.863513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.863528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.863607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.863621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.863770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.863785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.863881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.863896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.864083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.864097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.864198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.864214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.864435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.864451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.864564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.864578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.864675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.864691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.864792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.864806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.864914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.864929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.865062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.865078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.865228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.865243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.865461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.865475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.865635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.865649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.865741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.865755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.866007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.866023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.866181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.866196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.866351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.866366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.866531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.866545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.866695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.866710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.866823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.866837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.867023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.867037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.867136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.867150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.867247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.867261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.867348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.867362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.867608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.867628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.867801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.867816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.867919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.867934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.868052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.868066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.868227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.868241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.868402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.868416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.868587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.868601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.868822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.868836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.868991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.869006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.869202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.869218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.869394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.869412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.869538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.869554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.869717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.869732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.869829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.869844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.869929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.869944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.870053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.870067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.870153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.870167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.870242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.870256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.870426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.870441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.870555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.870569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.870720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.870737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.870918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.870933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.871057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.871092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.871189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.871206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.871370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.330 [2024-12-07 10:10:21.871386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.330 qpair failed and we were unable to recover it.
00:35:53.330 [2024-12-07 10:10:21.871553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.871568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.871761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.871776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.871892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.871908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.872009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.872023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.872200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.872215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.872313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.872327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.872485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.872501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.872649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.872665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.872854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.872869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.873099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.873114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.873266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.873281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.873380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.873395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.873491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.873507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.873600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.873614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.873772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.873787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.874012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.874028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.874127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.874142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.874305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.874322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.874481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.874497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.874671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.874685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.874855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.874871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.874984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.874999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.875247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.875261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.875432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.875447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.875600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.875615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.875785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.875802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.875982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.875997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.876253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.876268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.876497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.876512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.876742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.876757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.877003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.877018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.877128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.877143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.877299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.877315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.877498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.877513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.877663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.877677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.877821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.877836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.878026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.878042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.878141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.878156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.878362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.878378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.878545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.878559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.878650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.878665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.878834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.878849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.878954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.878969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.879119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.879133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.879307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.879321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.879485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.879501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.879586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.879601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.879702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.879717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.879872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.879886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.879984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.880000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.880219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.880235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.880333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.880348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.880443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.880457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.880618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.880633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.880732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.331 [2024-12-07 10:10:21.880746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.331 qpair failed and we were unable to recover it.
00:35:53.331 [2024-12-07 10:10:21.880896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.331 [2024-12-07 10:10:21.880911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.331 qpair failed and we were unable to recover it. 00:35:53.331 [2024-12-07 10:10:21.881074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.331 [2024-12-07 10:10:21.881089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.331 qpair failed and we were unable to recover it. 00:35:53.331 [2024-12-07 10:10:21.881196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.331 [2024-12-07 10:10:21.881211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.331 qpair failed and we were unable to recover it. 00:35:53.331 [2024-12-07 10:10:21.881393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.331 [2024-12-07 10:10:21.881407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.331 qpair failed and we were unable to recover it. 00:35:53.331 [2024-12-07 10:10:21.881520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.331 [2024-12-07 10:10:21.881534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.331 qpair failed and we were unable to recover it. 
00:35:53.331 [2024-12-07 10:10:21.881640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.331 [2024-12-07 10:10:21.881656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.331 qpair failed and we were unable to recover it. 00:35:53.331 [2024-12-07 10:10:21.881869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.331 [2024-12-07 10:10:21.881884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.331 qpair failed and we were unable to recover it. 00:35:53.331 [2024-12-07 10:10:21.881980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.331 [2024-12-07 10:10:21.881995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.331 qpair failed and we were unable to recover it. 00:35:53.331 [2024-12-07 10:10:21.882095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.331 [2024-12-07 10:10:21.882110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.331 qpair failed and we were unable to recover it. 00:35:53.331 [2024-12-07 10:10:21.882282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.331 [2024-12-07 10:10:21.882311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.331 qpair failed and we were unable to recover it. 
00:35:53.331 [2024-12-07 10:10:21.882487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.331 [2024-12-07 10:10:21.882506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.331 qpair failed and we were unable to recover it. 00:35:53.331 [2024-12-07 10:10:21.882604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.331 [2024-12-07 10:10:21.882618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.331 qpair failed and we were unable to recover it. 00:35:53.331 [2024-12-07 10:10:21.882711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.331 [2024-12-07 10:10:21.882726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.331 qpair failed and we were unable to recover it. 00:35:53.331 [2024-12-07 10:10:21.882911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.331 [2024-12-07 10:10:21.882926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.331 qpair failed and we were unable to recover it. 00:35:53.331 [2024-12-07 10:10:21.883095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.331 [2024-12-07 10:10:21.883110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.331 qpair failed and we were unable to recover it. 
00:35:53.331 [2024-12-07 10:10:21.883294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.331 [2024-12-07 10:10:21.883308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.331 qpair failed and we were unable to recover it. 00:35:53.331 [2024-12-07 10:10:21.883477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.331 [2024-12-07 10:10:21.883493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.331 qpair failed and we were unable to recover it. 00:35:53.331 [2024-12-07 10:10:21.883649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.331 [2024-12-07 10:10:21.883665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.331 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.883908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.883923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.884097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.884113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 
00:35:53.332 [2024-12-07 10:10:21.884274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.884289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.884503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.884518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.884669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.884683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.884852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.884867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.885037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.885052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 
00:35:53.332 [2024-12-07 10:10:21.885237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.885252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.885407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.885422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.885586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.885601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.885687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.885701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.885802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.885816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 
00:35:53.332 [2024-12-07 10:10:21.885906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.885920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.886034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.886049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.886210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.886224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.886444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.886458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.886700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.886715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 
00:35:53.332 [2024-12-07 10:10:21.886882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.886897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.887002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.887017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.887108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.887123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.887294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.887309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.887464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.887479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 
00:35:53.332 [2024-12-07 10:10:21.887704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.887719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.887865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.887880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.888075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.888090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.888179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.888193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.888426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.888442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 
00:35:53.332 [2024-12-07 10:10:21.888545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.888560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.888734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.888754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.888935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.888959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.889134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.889154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.889331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.889350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 
00:35:53.332 [2024-12-07 10:10:21.889461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.889488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.889584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.889603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.889715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.889730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.889893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.889908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.889993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.890008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 
00:35:53.332 [2024-12-07 10:10:21.890170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.890185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.890352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.890367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.890463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.890477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.890564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.890578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.890728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.890743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 
00:35:53.332 [2024-12-07 10:10:21.890897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.890911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.891004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.891020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.891209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.891228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.891451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.891466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.891630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.891644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 
00:35:53.332 [2024-12-07 10:10:21.891809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.891824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.892056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.892072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.892181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.892196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.892359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.892374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.892473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.892488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 
00:35:53.332 [2024-12-07 10:10:21.892588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.892603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.892767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.892782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.892865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.892880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.892993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.893008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 00:35:53.332 [2024-12-07 10:10:21.893174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.332 [2024-12-07 10:10:21.893190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.332 qpair failed and we were unable to recover it. 
00:35:53.332 [2024-12-07 10:10:21.893342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.332 [2024-12-07 10:10:21.893356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.332 qpair failed and we were unable to recover it.
00:35:53.332 [2024-12-07 10:10:21.893464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.332 [2024-12-07 10:10:21.893478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.332 qpair failed and we were unable to recover it.
00:35:53.332 [2024-12-07 10:10:21.893579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.332 [2024-12-07 10:10:21.893600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.332 qpair failed and we were unable to recover it.
00:35:53.332 [2024-12-07 10:10:21.893757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.332 [2024-12-07 10:10:21.893773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.332 qpair failed and we were unable to recover it.
00:35:53.332 [2024-12-07 10:10:21.893894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.332 [2024-12-07 10:10:21.893909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.332 qpair failed and we were unable to recover it.
00:35:53.332 [2024-12-07 10:10:21.894078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.332 [2024-12-07 10:10:21.894093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.332 qpair failed and we were unable to recover it.
00:35:53.332 [2024-12-07 10:10:21.894187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.332 [2024-12-07 10:10:21.894201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.332 qpair failed and we were unable to recover it.
00:35:53.332 [2024-12-07 10:10:21.894299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.332 [2024-12-07 10:10:21.894313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.332 qpair failed and we were unable to recover it.
00:35:53.332 [2024-12-07 10:10:21.894482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.332 [2024-12-07 10:10:21.894496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.332 qpair failed and we were unable to recover it.
00:35:53.332 [2024-12-07 10:10:21.894656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.332 [2024-12-07 10:10:21.894670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.332 qpair failed and we were unable to recover it.
00:35:53.332 [2024-12-07 10:10:21.894871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.332 [2024-12-07 10:10:21.894885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.332 qpair failed and we were unable to recover it.
00:35:53.332 [2024-12-07 10:10:21.895060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.332 [2024-12-07 10:10:21.895075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.332 qpair failed and we were unable to recover it.
00:35:53.332 [2024-12-07 10:10:21.895284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.895298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.895498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.895512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.895612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.895625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.895736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.895754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.895954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.895969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.896159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.896174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.896266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.896282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.896464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.896479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.896703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.896717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.896816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.896831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.896957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.896972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.897072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.897087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.897191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.897205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.897370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.897385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.897483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.897498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.897590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.897605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.897794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.897810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.898039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.898065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.898229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.898244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.898331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.898345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.898451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.898466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.898544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.898558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.898776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.898790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.898972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.898999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.899104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.899119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.899287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.899302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.899402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.899417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.899569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.899583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.899757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.899771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.899889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.899903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.900081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.900108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.900282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.900299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.900385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.900400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.900628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.900643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.900831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.900846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.901018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.901036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.901138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.901152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.901333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.901349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.901462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.901476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.901596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.901611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.901713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.901729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.901912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.901927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.902019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.902034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.902204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.902219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.902389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.902404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.902563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.902584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.902695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.902710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.902864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.902880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.903116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.903132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.903238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.903253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.903489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.903503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.903603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.903618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.903717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.903732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.903812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.903826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.904064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.904080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.904189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.904204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.904307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.904321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.904425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.904444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.904621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.904636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.904785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.904799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.905045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.905060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.905217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.905232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.905401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.905416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.905571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.905586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.333 [2024-12-07 10:10:21.905689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.333 [2024-12-07 10:10:21.905703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.333 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.905936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.905957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.906128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.906144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.906337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.906352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.906455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.906470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.906566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.906580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.906764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.906779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.906896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.906911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.907083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.907099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.907331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.907347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.907500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.907516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.907602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.907624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.907808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.907824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.907910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.907926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.908101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.908116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.908284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.908299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.908477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.908492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.908577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.908592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.908736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.908751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.908827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.908842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.909063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.909084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.909211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.909226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.909322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.909337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.909598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.909612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.909835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.909850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.910124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.910140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.910315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.910330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.910594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.910610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.910807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.910823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.910927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.910942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.911143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.911158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.911382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.911397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.911569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.911583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.911686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.911701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.911893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.911910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.912082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.912097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.912277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.912293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.912413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.912427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.912514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.334 [2024-12-07 10:10:21.912528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.334 qpair failed and we were unable to recover it.
00:35:53.334 [2024-12-07 10:10:21.912747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.912762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.912915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.912931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.913123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.913139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.913325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.913341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.913516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.913533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 
00:35:53.334 [2024-12-07 10:10:21.913634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.913649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.913801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.913815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.914003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.914019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.914181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.914199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.914304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.914320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 
00:35:53.334 [2024-12-07 10:10:21.914412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.914427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.914588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.914602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.914831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.914846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.914998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.915018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.915134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.915149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 
00:35:53.334 [2024-12-07 10:10:21.915234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.915248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.915339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.915354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.915449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.915464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.915560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.915575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.915665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.915679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 
00:35:53.334 [2024-12-07 10:10:21.915768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.915783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.915885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.915899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.916001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.916018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.916203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.916241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.916494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.916511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 
00:35:53.334 [2024-12-07 10:10:21.916662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.916676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.916852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.916866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.916989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.917003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.917222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.917236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.917350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.917365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 
00:35:53.334 [2024-12-07 10:10:21.917532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.917547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.917715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.334 [2024-12-07 10:10:21.917730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.334 qpair failed and we were unable to recover it. 00:35:53.334 [2024-12-07 10:10:21.917836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.917850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.918051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.918067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.918231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.918246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 
00:35:53.335 [2024-12-07 10:10:21.918408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.918427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.918540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.918554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.918710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.918725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.918820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.918834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.918912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.918926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 
00:35:53.335 [2024-12-07 10:10:21.919107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.919121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.919289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.919304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.919421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.919437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.919546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.919563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.919717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.919732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 
00:35:53.335 [2024-12-07 10:10:21.919887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.919902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.919993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.920008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.920174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.920188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.920349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.920364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.920460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.920474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 
00:35:53.335 [2024-12-07 10:10:21.920678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.920693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.920866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.920880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.921108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.921125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.921240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.921254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.921416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.921433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 
00:35:53.335 [2024-12-07 10:10:21.921544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.921559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.921720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.921734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.921959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.921974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.922066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.922080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.922205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.922220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 
00:35:53.335 [2024-12-07 10:10:21.922392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.922407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.922508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.922524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.922608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.922626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.922736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.922750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.922998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.923014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 
00:35:53.335 [2024-12-07 10:10:21.923164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.923179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.923287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.923301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.923385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.923401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.923642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.923656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.923758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.923773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 
00:35:53.335 [2024-12-07 10:10:21.923871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.923887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.923997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.924011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.924117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.924132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.924229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.924244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.924472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.924487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 
00:35:53.335 [2024-12-07 10:10:21.924660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.924678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.924784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.924799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.924987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.925002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.925163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.925177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.925434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.925448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 
00:35:53.335 [2024-12-07 10:10:21.925640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.925654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.925875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.925890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.926015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.926030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.926185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.926199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.926298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.926313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 
00:35:53.335 [2024-12-07 10:10:21.926474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.926488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.926707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.926721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.926909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.926924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.927036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.927051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.927218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.927233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 
00:35:53.335 [2024-12-07 10:10:21.927403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.927418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.927635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.927650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.927811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.927825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.928004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.928018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 00:35:53.335 [2024-12-07 10:10:21.928198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.335 [2024-12-07 10:10:21.928213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.335 qpair failed and we were unable to recover it. 
00:35:53.336 [2024-12-07 10:10:21.928327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.928342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.928577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.928592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.928754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.928769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.928931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.928946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.929070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.929085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 
00:35:53.336 [2024-12-07 10:10:21.929285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.929300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.929473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.929487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.929581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.929596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.929815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.929830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.930008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.930024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 
00:35:53.336 [2024-12-07 10:10:21.930116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.930131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.930360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.930376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.930477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.930492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.930585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.930600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.930757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.930771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 
00:35:53.336 [2024-12-07 10:10:21.930870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.930885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.931040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.931056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.931204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.931220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.931436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.931451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.931555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.931570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 
00:35:53.336 [2024-12-07 10:10:21.931731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.931749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.931847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.931861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.932026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.932041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.932130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.932145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.932320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.932334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 
00:35:53.336 [2024-12-07 10:10:21.932426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.932441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.932553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.932568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.932768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.932784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.932881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.932897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.933008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.933022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 
00:35:53.336 [2024-12-07 10:10:21.933121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.933141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.933241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.933255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.933412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.933427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.933537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.933552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.933714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.933728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 
00:35:53.336 [2024-12-07 10:10:21.933840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.933855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.934070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.934085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.934250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.934264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.934357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.934372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.934471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.934485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 
00:35:53.336 [2024-12-07 10:10:21.934588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.934604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.934768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.934783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.934960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.934975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.935167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.935181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.935266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.935281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 
00:35:53.336 [2024-12-07 10:10:21.935378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.935393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.935478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.935493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.935615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.935637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.935730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.935750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.935852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.935867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 
00:35:53.336 [2024-12-07 10:10:21.935985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.936000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.936089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.936103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.936179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.936193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.936411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.936427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.936593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.936607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 
00:35:53.336 [2024-12-07 10:10:21.936718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.936732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.936842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.936857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.936958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.936972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.937062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.937077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.937226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.937240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 
00:35:53.336 [2024-12-07 10:10:21.937343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.937362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.937585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.937601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.937697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.937711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.937860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.937875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.937987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.938002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 
00:35:53.336 [2024-12-07 10:10:21.938084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.938099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.336 [2024-12-07 10:10:21.938276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.336 [2024-12-07 10:10:21.938291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.336 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.938442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.938457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.938570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.938585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.938854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.938867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 
00:35:53.337 [2024-12-07 10:10:21.938971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.938986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.939071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.939086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.939178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.939193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.939370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.939385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.939581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.939596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 
00:35:53.337 [2024-12-07 10:10:21.939713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.939727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.939847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.939862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.940027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.940042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.940153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.940167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.940321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.940336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 
00:35:53.337 [2024-12-07 10:10:21.940453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.940468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.940707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.940722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.940903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.940918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.941090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.941105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.941352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.941367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 
00:35:53.337 [2024-12-07 10:10:21.941533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.941548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.941648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.941663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.941762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.941780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.941958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.941974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.942136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.942150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 
00:35:53.337 [2024-12-07 10:10:21.942317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.942332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.942427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.942442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.942544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.942559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.942718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.942733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.942957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.942973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 
00:35:53.337 [2024-12-07 10:10:21.943097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.943111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.943210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.943224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.943375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.943390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.943492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.943506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.943730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.943745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 
00:35:53.337 [2024-12-07 10:10:21.943983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.943999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.944245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.944260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.944365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.944379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.944624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.944639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.944789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.944803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 
00:35:53.337 [2024-12-07 10:10:21.944891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.944905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.945126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.945141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.945317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.945331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.945425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.945439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.945548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.945562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 
00:35:53.337 [2024-12-07 10:10:21.945705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.945719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.945968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.945983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.946097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.946112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.946273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.946288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.946453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.946470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 
00:35:53.337 [2024-12-07 10:10:21.946648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.946664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.946762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.946776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.946913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.946927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.947096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.947111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.947210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.947225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 
00:35:53.337 [2024-12-07 10:10:21.947369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.947384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.947540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.947555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.947708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.947722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.947796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.947815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.947937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.947957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 
00:35:53.337 [2024-12-07 10:10:21.948131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.948146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.948219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.948234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.948399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.948414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.948567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.948582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.948665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.948680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 
00:35:53.337 [2024-12-07 10:10:21.948842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.948858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.949024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.949040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.949188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.949202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.949356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.949371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.949450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.949465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 
00:35:53.337 [2024-12-07 10:10:21.949734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.949749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.337 qpair failed and we were unable to recover it. 00:35:53.337 [2024-12-07 10:10:21.949922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.337 [2024-12-07 10:10:21.949937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.950042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.950057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.950148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.950163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.950331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.950345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 
00:35:53.338 [2024-12-07 10:10:21.950512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.950527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.950687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.950705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.950814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.950829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.950991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.951006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.951090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.951105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 
00:35:53.338 [2024-12-07 10:10:21.951187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.951201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.951290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.951304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.951445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.951459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.951620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.951635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.951796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.951811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 
00:35:53.338 [2024-12-07 10:10:21.951902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.951917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.952076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.952091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.952192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.952215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.952382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.952397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.952566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.952581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 
00:35:53.338 [2024-12-07 10:10:21.952742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.952757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.952849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.952863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.952974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.952990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.953159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.953174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.953335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.953350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 
00:35:53.338 [2024-12-07 10:10:21.953575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.953590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.953818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.953833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.953989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.954005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.954216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.954230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.954395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.954410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 
00:35:53.338 [2024-12-07 10:10:21.954503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.954517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.954670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.954685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.954834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.954848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.955042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.955057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.955218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.955233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 
00:35:53.338 [2024-12-07 10:10:21.955360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.955375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.955477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.955492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.955586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.955600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.955779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.955794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.955879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.955893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 
00:35:53.338 [2024-12-07 10:10:21.955993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.956008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.956186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.956204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.956292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.956306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.956477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.956491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.956584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.956599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 
00:35:53.338 [2024-12-07 10:10:21.956844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.956859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.957008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.957024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.957219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.957236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.957429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.957444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.957537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.957552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 
00:35:53.338 [2024-12-07 10:10:21.957655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.957670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.957770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.957785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.957906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.957921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.958098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.958113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.958195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.958210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 
00:35:53.338 [2024-12-07 10:10:21.958474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.958489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.958706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.958721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.958935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.958954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.959174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.959189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.959341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.959356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 
00:35:53.338 [2024-12-07 10:10:21.959512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.959530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.959698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.959713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.338 [2024-12-07 10:10:21.959887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.338 [2024-12-07 10:10:21.959902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.338 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.960086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.960102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.960220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.960234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 
00:35:53.339 [2024-12-07 10:10:21.960399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.960415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.960601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.960615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.960832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.960847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.961082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.961098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.961266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.961282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 
00:35:53.339 [2024-12-07 10:10:21.961529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.961544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.961646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.961661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.961826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.961841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.962057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.962073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.962297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.962312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 
00:35:53.339 [2024-12-07 10:10:21.962552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.962567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.962809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.962824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.962955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.962971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.963214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.963228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.963470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.963485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 
00:35:53.339 [2024-12-07 10:10:21.963703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.963719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.963886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.963900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.964074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.964090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.964250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.964265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.964524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.964540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 
00:35:53.339 [2024-12-07 10:10:21.964697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.964711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.964961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.964976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.965139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.965160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.965267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.965282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.965470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.965485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 
00:35:53.339 [2024-12-07 10:10:21.965612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.965627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.965798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.965813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.965972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.965987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.966176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.966190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.966267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.966281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 
00:35:53.339 [2024-12-07 10:10:21.966428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.966443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.966610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.966625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.966730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.966745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.966860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.966875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.967028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.967044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 
00:35:53.339 [2024-12-07 10:10:21.967263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.967281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.967517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.967531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.967716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.967732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.967827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.967842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.968016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.968030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 
00:35:53.339 [2024-12-07 10:10:21.968194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.968209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.968317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.968332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.968444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.968459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.968621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.968636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.968733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.968748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 
00:35:53.339 [2024-12-07 10:10:21.968916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.968930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.969084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.969098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.969287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.969302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.969371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.969386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.969555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.969569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 
00:35:53.339 [2024-12-07 10:10:21.969720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.969735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.969828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.969843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.970026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.970041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.970150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.970165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.970341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.970356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 
00:35:53.339 [2024-12-07 10:10:21.970505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.970521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.970679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.970697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.970868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.970884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.971059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.971074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.971227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.971243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 
00:35:53.339 [2024-12-07 10:10:21.971337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.971351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.971459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.971473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.971578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.971593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.971679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.971694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.971792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.971807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 
00:35:53.339 [2024-12-07 10:10:21.971906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.339 [2024-12-07 10:10:21.971920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-12-07 10:10:21.971992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.340 [2024-12-07 10:10:21.972008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.340 qpair failed and we were unable to recover it. 00:35:53.340 [2024-12-07 10:10:21.972169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.340 [2024-12-07 10:10:21.972184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.340 qpair failed and we were unable to recover it. 00:35:53.340 [2024-12-07 10:10:21.972358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.340 [2024-12-07 10:10:21.972373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.340 qpair failed and we were unable to recover it. 00:35:53.340 [2024-12-07 10:10:21.972471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.340 [2024-12-07 10:10:21.972485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.340 qpair failed and we were unable to recover it. 
00:35:53.340 [2024-12-07 10:10:21.977909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.340 [2024-12-07 10:10:21.977923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.340 qpair failed and we were unable to recover it.
00:35:53.340 [2024-12-07 10:10:21.978214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.340 [2024-12-07 10:10:21.978230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.340 qpair failed and we were unable to recover it.
00:35:53.340 [2024-12-07 10:10:21.978500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.340 [2024-12-07 10:10:21.978519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.340 qpair failed and we were unable to recover it.
00:35:53.340 [2024-12-07 10:10:21.978721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.340 [2024-12-07 10:10:21.978736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.340 qpair failed and we were unable to recover it.
00:35:53.340 [2024-12-07 10:10:21.978849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.340 [2024-12-07 10:10:21.978863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.340 qpair failed and we were unable to recover it.
00:35:53.342 [2024-12-07 10:10:21.993660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.993675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.993910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.993925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.994103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.994118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.994333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.994349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.994447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.994463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 
00:35:53.342 [2024-12-07 10:10:21.994630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.994646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.994811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.994825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.995044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.995061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.995277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.995292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.995570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.995584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 
00:35:53.342 [2024-12-07 10:10:21.995679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.995694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.995955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.995971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.996190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.996205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.996462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.996478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.996693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.996708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 
00:35:53.342 [2024-12-07 10:10:21.996925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.996940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.997218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.997234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.997467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.997481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.997630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.997645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.997754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.997770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 
00:35:53.342 [2024-12-07 10:10:21.997943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.997963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.998221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.998235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.998412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.998428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.998687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.998702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.998867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.998882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 
00:35:53.342 [2024-12-07 10:10:21.999122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.999138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.999324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.999339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.999555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.999569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:21.999815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:21.999831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:22.000048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:22.000063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 
00:35:53.342 [2024-12-07 10:10:22.000244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.342 [2024-12-07 10:10:22.000259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.342 qpair failed and we were unable to recover it. 00:35:53.342 [2024-12-07 10:10:22.000483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.000497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.000686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.000700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.000922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.000937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.001162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.001177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 
00:35:53.343 [2024-12-07 10:10:22.001405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.001420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.001620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.001635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.001878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.001893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.002067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.002082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.002176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.002191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 
00:35:53.343 [2024-12-07 10:10:22.002376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.002391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.002548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.002562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.002794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.002809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.003057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.003073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.003262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.003277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 
00:35:53.343 [2024-12-07 10:10:22.003517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.003533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.003776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.003792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.003959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.003975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.004153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.004169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.004417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.004436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 
00:35:53.343 [2024-12-07 10:10:22.004682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.004697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.004932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.004953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.005149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.005163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.005379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.005393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.005557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.005571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 
00:35:53.343 [2024-12-07 10:10:22.005724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.005739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.005959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.005974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.006176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.006191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.006363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.006378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.006589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.006603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 
00:35:53.343 [2024-12-07 10:10:22.006819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.006835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.006995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.007010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.007229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.007244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.007485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.007499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.007691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.007705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 
00:35:53.343 [2024-12-07 10:10:22.007938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.007969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.008162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.008178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.008413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.008428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.008684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.343 [2024-12-07 10:10:22.008698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.343 qpair failed and we were unable to recover it. 00:35:53.343 [2024-12-07 10:10:22.008938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.344 [2024-12-07 10:10:22.008959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.344 qpair failed and we were unable to recover it. 
00:35:53.344 [2024-12-07 10:10:22.009206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.344 [2024-12-07 10:10:22.009222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.344 qpair failed and we were unable to recover it. 00:35:53.344 [2024-12-07 10:10:22.009450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.344 [2024-12-07 10:10:22.009465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.344 qpair failed and we were unable to recover it. 00:35:53.344 [2024-12-07 10:10:22.009682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.344 [2024-12-07 10:10:22.009697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.344 qpair failed and we were unable to recover it. 00:35:53.344 [2024-12-07 10:10:22.009810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.344 [2024-12-07 10:10:22.009827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.344 qpair failed and we were unable to recover it. 00:35:53.344 [2024-12-07 10:10:22.009943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.344 [2024-12-07 10:10:22.009967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.344 qpair failed and we were unable to recover it. 
00:35:53.344 [2024-12-07 10:10:22.010128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.344 [2024-12-07 10:10:22.010145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.344 qpair failed and we were unable to recover it. 00:35:53.344 [2024-12-07 10:10:22.010384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.344 [2024-12-07 10:10:22.010403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.344 qpair failed and we were unable to recover it. 00:35:53.344 [2024-12-07 10:10:22.010589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.344 [2024-12-07 10:10:22.010611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.344 qpair failed and we were unable to recover it. 00:35:53.344 [2024-12-07 10:10:22.010811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.344 [2024-12-07 10:10:22.010827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.344 qpair failed and we were unable to recover it. 00:35:53.344 [2024-12-07 10:10:22.010929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.344 [2024-12-07 10:10:22.010944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.344 qpair failed and we were unable to recover it. 
00:35:53.344 [2024-12-07 10:10:22.011037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.344 [2024-12-07 10:10:22.011052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.344 qpair failed and we were unable to recover it.
00:35:53.344 [... identical connect()/qpair-failure records for tqpair=0x2159010 repeated with varying timestamps ...]
00:35:53.623 [2024-12-07 10:10:22.019384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.623 [2024-12-07 10:10:22.019414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.623 qpair failed and we were unable to recover it.
00:35:53.625 [... identical connect()/qpair-failure records for tqpair=0x7efbf8000b90 repeated with varying timestamps ...]
00:35:53.625 [2024-12-07 10:10:22.035917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.035932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-12-07 10:10:22.036112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.036127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-12-07 10:10:22.036262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.036289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-12-07 10:10:22.036442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.036457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-12-07 10:10:22.036561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.036576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 
00:35:53.625 [2024-12-07 10:10:22.036814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.036830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-12-07 10:10:22.036982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.036998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-12-07 10:10:22.037118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.037134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-12-07 10:10:22.037296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.037312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-12-07 10:10:22.037486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.037501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 
00:35:53.625 [2024-12-07 10:10:22.037687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.037703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-12-07 10:10:22.037800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.037815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-12-07 10:10:22.037991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.038010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-12-07 10:10:22.038267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.038284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-12-07 10:10:22.038468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.038482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 
00:35:53.625 [2024-12-07 10:10:22.038654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.038676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-12-07 10:10:22.038793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.038808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-12-07 10:10:22.039029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.039047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-12-07 10:10:22.039279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.039295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-12-07 10:10:22.039482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.039497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 
00:35:53.625 [2024-12-07 10:10:22.039610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.039626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-12-07 10:10:22.039809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.039826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-12-07 10:10:22.039954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-12-07 10:10:22.039970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.040071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.040084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.040367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.040383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 
00:35:53.626 [2024-12-07 10:10:22.040547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.040562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.040725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.040741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.040829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.040845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.041067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.041083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.041281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.041296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 
00:35:53.626 [2024-12-07 10:10:22.041437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.041452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.041629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.041643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.041828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.041843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.041996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.042013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.042237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.042252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 
00:35:53.626 [2024-12-07 10:10:22.042492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.042507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.042733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.042748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.042962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.042978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.043159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.043176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.043464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.043481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 
00:35:53.626 [2024-12-07 10:10:22.043636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.043651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.043871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.043885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.044018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.044041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.044233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.044250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.044444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.044458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 
00:35:53.626 [2024-12-07 10:10:22.044793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.044840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.045126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.045161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.045435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.045466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.045688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.045718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.046033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.046069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 
00:35:53.626 [2024-12-07 10:10:22.046342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.046373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.046645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.046659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.046878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.046895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.047078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.047093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.047240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.047255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 
00:35:53.626 [2024-12-07 10:10:22.047514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.047546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.047854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.047896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.048106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.048139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.048427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.048461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-12-07 10:10:22.048741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-12-07 10:10:22.048774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 
00:35:53.627 [2024-12-07 10:10:22.049002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.627 [2024-12-07 10:10:22.049041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.627 qpair failed and we were unable to recover it. 00:35:53.627 [2024-12-07 10:10:22.049304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.627 [2024-12-07 10:10:22.049339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.627 qpair failed and we were unable to recover it. 00:35:53.627 [2024-12-07 10:10:22.049662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.627 [2024-12-07 10:10:22.049694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.627 qpair failed and we were unable to recover it. 00:35:53.627 [2024-12-07 10:10:22.049886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.627 [2024-12-07 10:10:22.049919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.627 qpair failed and we were unable to recover it. 00:35:53.627 [2024-12-07 10:10:22.050210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.627 [2024-12-07 10:10:22.050247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.627 qpair failed and we were unable to recover it. 
00:35:53.627 [2024-12-07 10:10:22.050526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.627 [2024-12-07 10:10:22.050560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.627 qpair failed and we were unable to recover it. 00:35:53.627 [2024-12-07 10:10:22.050789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.627 [2024-12-07 10:10:22.050825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.627 qpair failed and we were unable to recover it. 00:35:53.627 [2024-12-07 10:10:22.051089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.627 [2024-12-07 10:10:22.051124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.627 qpair failed and we were unable to recover it. 00:35:53.627 [2024-12-07 10:10:22.051331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.627 [2024-12-07 10:10:22.051364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.627 qpair failed and we were unable to recover it. 00:35:53.627 [2024-12-07 10:10:22.051578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.627 [2024-12-07 10:10:22.051619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.627 qpair failed and we were unable to recover it. 
00:35:53.627 [2024-12-07 10:10:22.051821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.627 [2024-12-07 10:10:22.051857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.627 qpair failed and we were unable to recover it. 00:35:53.627 [2024-12-07 10:10:22.052060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.627 [2024-12-07 10:10:22.052097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.627 qpair failed and we were unable to recover it. 00:35:53.627 [2024-12-07 10:10:22.052361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.627 [2024-12-07 10:10:22.052394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.627 qpair failed and we were unable to recover it. 00:35:53.627 [2024-12-07 10:10:22.052579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.627 [2024-12-07 10:10:22.052599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.627 qpair failed and we were unable to recover it. 00:35:53.627 [2024-12-07 10:10:22.052864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.627 [2024-12-07 10:10:22.052879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.627 qpair failed and we were unable to recover it. 
00:35:53.627 [2024-12-07 10:10:22.053044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.627 [2024-12-07 10:10:22.053061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.627 qpair failed and we were unable to recover it.
00:35:53.627 [2024-12-07 10:10:22.053302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.627 [2024-12-07 10:10:22.053317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.627 qpair failed and we were unable to recover it.
00:35:53.627 [2024-12-07 10:10:22.053575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.627 [2024-12-07 10:10:22.053593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.627 qpair failed and we were unable to recover it.
00:35:53.627 [2024-12-07 10:10:22.053711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.627 [2024-12-07 10:10:22.053727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.627 qpair failed and we were unable to recover it.
00:35:53.627 [2024-12-07 10:10:22.053990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.627 [2024-12-07 10:10:22.054024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.627 qpair failed and we were unable to recover it.
00:35:53.627 [2024-12-07 10:10:22.054302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.627 [2024-12-07 10:10:22.054334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.627 qpair failed and we were unable to recover it.
00:35:53.627 [2024-12-07 10:10:22.054558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.627 [2024-12-07 10:10:22.054573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.627 qpair failed and we were unable to recover it.
00:35:53.627 [2024-12-07 10:10:22.054756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.627 [2024-12-07 10:10:22.054788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.627 qpair failed and we were unable to recover it.
00:35:53.627 [2024-12-07 10:10:22.055079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.627 [2024-12-07 10:10:22.055113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.627 qpair failed and we were unable to recover it.
00:35:53.627 [2024-12-07 10:10:22.055416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.627 [2024-12-07 10:10:22.055447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.627 qpair failed and we were unable to recover it.
00:35:53.627 [2024-12-07 10:10:22.055657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.627 [2024-12-07 10:10:22.055690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.627 qpair failed and we were unable to recover it.
00:35:53.627 [2024-12-07 10:10:22.055962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.627 [2024-12-07 10:10:22.055995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.627 qpair failed and we were unable to recover it.
00:35:53.627 [2024-12-07 10:10:22.056212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.627 [2024-12-07 10:10:22.056227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.627 qpair failed and we were unable to recover it.
00:35:53.627 [2024-12-07 10:10:22.056403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.627 [2024-12-07 10:10:22.056418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.627 qpair failed and we were unable to recover it.
00:35:53.627 [2024-12-07 10:10:22.056602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.627 [2024-12-07 10:10:22.056616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.627 qpair failed and we were unable to recover it.
00:35:53.627 [2024-12-07 10:10:22.056901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.056932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.057133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.057166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.057368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.057399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.057597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.057629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.057886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.057919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.058070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.058105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.058299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.058338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.058631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.058664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.058963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.058997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.059221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.059253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.059553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.059568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.059785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.059800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.059972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.060006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.060220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.060253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.060371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.060402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.060647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.060662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.060824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.060839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.060961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.060994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.061190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.061222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.061416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.061449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.061635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.061650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.061846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.061879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.062091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.062124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.062268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.062299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.062628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.062642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.062746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.062761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.062941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.062987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.063294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.063326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.063552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.063568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.063746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.063778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.063985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.064019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.064216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.064248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.064367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.064396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.064555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.064569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.064742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.064758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.064945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.064965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.065154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.628 [2024-12-07 10:10:22.065186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.628 qpair failed and we were unable to recover it.
00:35:53.628 [2024-12-07 10:10:22.065459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.065492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.065712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.065759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.065978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.066012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.066207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.066239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.066441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.066455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.066626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.066642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.066817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.066832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.066929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.066943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.067051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.067066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.067245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.067260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.067484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.067520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.067707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.067744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.067932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.067981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.068185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.068218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.068354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.068370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.068561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.068595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.068853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.068885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.069086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.069130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.069362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.069377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.069571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.069613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.069758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.069792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.070011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.070064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.070397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.070437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.070632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.070651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.070774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.070789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.071030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.071046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.071140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.071155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.071343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.071376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.071671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.071704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.071977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.072010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.072305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.072338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.072542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.072557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.072801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.072834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.073042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.073076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.073279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.073311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.073564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.073596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.073787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.073820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.629 [2024-12-07 10:10:22.074020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.629 [2024-12-07 10:10:22.074055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.629 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.074312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.074342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.074540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.074554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.074725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.074740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.074986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.075020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.075226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.075259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.075510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.075544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.075771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.075786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.075911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.075942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.076109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.076143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.076418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.076451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.076694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.076710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.076884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.076899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.077183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.077258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.077525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.077564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.077796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.077811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.078033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.078049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.078265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.078280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.078436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.078451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.078702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.078736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.078937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.078986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.079279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.079320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.079565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.079600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.079872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.079909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.080199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.080234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.080502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.630 [2024-12-07 10:10:22.080518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.630 qpair failed and we were unable to recover it.
00:35:53.630 [2024-12-07 10:10:22.080676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.630 [2024-12-07 10:10:22.080696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.630 qpair failed and we were unable to recover it. 00:35:53.630 [2024-12-07 10:10:22.080962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.630 [2024-12-07 10:10:22.081000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.630 qpair failed and we were unable to recover it. 00:35:53.630 [2024-12-07 10:10:22.081136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.630 [2024-12-07 10:10:22.081155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.630 qpair failed and we were unable to recover it. 00:35:53.630 [2024-12-07 10:10:22.081325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.630 [2024-12-07 10:10:22.081363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.630 qpair failed and we were unable to recover it. 00:35:53.630 [2024-12-07 10:10:22.081637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.630 [2024-12-07 10:10:22.081670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.630 qpair failed and we were unable to recover it. 
00:35:53.630 [2024-12-07 10:10:22.081864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.630 [2024-12-07 10:10:22.081899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.630 qpair failed and we were unable to recover it. 00:35:53.630 [2024-12-07 10:10:22.082176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.630 [2024-12-07 10:10:22.082213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.630 qpair failed and we were unable to recover it. 00:35:53.630 [2024-12-07 10:10:22.082482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.630 [2024-12-07 10:10:22.082517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.630 qpair failed and we were unable to recover it. 00:35:53.630 [2024-12-07 10:10:22.082750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.630 [2024-12-07 10:10:22.082783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.630 qpair failed and we were unable to recover it. 00:35:53.630 [2024-12-07 10:10:22.083009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.630 [2024-12-07 10:10:22.083045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.630 qpair failed and we were unable to recover it. 
00:35:53.630 [2024-12-07 10:10:22.083319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.630 [2024-12-07 10:10:22.083336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.630 qpair failed and we were unable to recover it. 00:35:53.630 [2024-12-07 10:10:22.083586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.630 [2024-12-07 10:10:22.083621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.630 qpair failed and we were unable to recover it. 00:35:53.630 [2024-12-07 10:10:22.083772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.083806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.084109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.084145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.084374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.084408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 
00:35:53.631 [2024-12-07 10:10:22.084631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.084646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.084736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.084752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.084926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.084942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.085148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.085164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.085351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.085365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 
00:35:53.631 [2024-12-07 10:10:22.085553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.085585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.085782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.085816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.086000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.086034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.086156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.086187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.086376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.086392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 
00:35:53.631 [2024-12-07 10:10:22.086634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.086651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.086803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.086818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.087013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.087051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.087172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.087205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.087420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.087454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 
00:35:53.631 [2024-12-07 10:10:22.087640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.087655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.087810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.087826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.088056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.088071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.088247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.088262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.088509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.088541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 
00:35:53.631 [2024-12-07 10:10:22.088670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.088701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.088994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.089010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.089179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.089193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.089409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.089425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.089672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.089686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 
00:35:53.631 [2024-12-07 10:10:22.089926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.089944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.090095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.090128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.090284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.090316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.090565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.090580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.090794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.090809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 
00:35:53.631 [2024-12-07 10:10:22.090927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.090942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.091138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.091153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.091305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.091320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.091473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.091488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.631 qpair failed and we were unable to recover it. 00:35:53.631 [2024-12-07 10:10:22.091717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.631 [2024-12-07 10:10:22.091748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 
00:35:53.632 [2024-12-07 10:10:22.092004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.092037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.092265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.092281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.092396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.092410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.092491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.092506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.092728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.092761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 
00:35:53.632 [2024-12-07 10:10:22.092970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.093003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.093130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.093162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.093378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.093393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.093553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.093585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.093732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.093764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 
00:35:53.632 [2024-12-07 10:10:22.093972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.094006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.094140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.094172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.094385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.094417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.094568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.094583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.094748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.094781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 
00:35:53.632 [2024-12-07 10:10:22.094971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.095005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.095197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.095228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.095400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.095470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.095615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.095651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.095885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.095919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 
00:35:53.632 [2024-12-07 10:10:22.096078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.096112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.096312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.096343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.096532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.096547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.096767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.096799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-12-07 10:10:22.097023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-12-07 10:10:22.097057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 
00:35:53.632 [2024-12-07 10:10:22.097313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.632 [2024-12-07 10:10:22.097329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.632 qpair failed and we were unable to recover it.
00:35:53.632 [... the same record — connect() failed, errno = 111, followed by "sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it." — repeats for every retry from 10:10:22.097590 through 10:10:22.122176 ...]
00:35:53.635 [2024-12-07 10:10:22.122443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-12-07 10:10:22.122474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-12-07 10:10:22.122773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-12-07 10:10:22.122805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-12-07 10:10:22.123081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-12-07 10:10:22.123113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-12-07 10:10:22.123398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-12-07 10:10:22.123412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-12-07 10:10:22.123578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-12-07 10:10:22.123592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 
00:35:53.635 [2024-12-07 10:10:22.123818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-12-07 10:10:22.123850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.124121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.124153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.124353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.124384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.124660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.124674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.124843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.124874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 
00:35:53.636 [2024-12-07 10:10:22.125078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.125111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.125386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.125419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.125702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.125733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.126078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.126111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.126388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.126420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 
00:35:53.636 [2024-12-07 10:10:22.126705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.126737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.126876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.126907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.127146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.127180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.127378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.127409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.127661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.127675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 
00:35:53.636 [2024-12-07 10:10:22.127921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.127963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.128167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.128199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.128417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.128431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.128657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.128687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.128884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.128915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 
00:35:53.636 [2024-12-07 10:10:22.129069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.129101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.129291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.129311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.129483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.129515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.129690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.129721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.129985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.130018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 
00:35:53.636 [2024-12-07 10:10:22.130215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.130246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.130469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.130500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.130686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.130700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.130892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.130923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.131130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.131162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 
00:35:53.636 [2024-12-07 10:10:22.131375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.131407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.131535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.131549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.131712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.131726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.131921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.131935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.132110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.132124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 
00:35:53.636 [2024-12-07 10:10:22.132289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.132321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.132476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.132508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.132641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-12-07 10:10:22.132673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-12-07 10:10:22.132869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.132901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.133119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.133152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 
00:35:53.637 [2024-12-07 10:10:22.133285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.133316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.133535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.133566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.133680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.133712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.133851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.133882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.134136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.134169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 
00:35:53.637 [2024-12-07 10:10:22.134372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.134404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.134590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.134604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.134784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.134816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.134971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.135010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.135199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.135230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 
00:35:53.637 [2024-12-07 10:10:22.135502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.135517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.135683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.135698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.135830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.135861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.136145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.136178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.136328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.136359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 
00:35:53.637 [2024-12-07 10:10:22.136637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.136667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.136783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.136815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.137015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.137031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.137193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.137223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.137380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.137412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 
00:35:53.637 [2024-12-07 10:10:22.137618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.137649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.137902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.137934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.138198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.138270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.138423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.138460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.138721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.138736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 
00:35:53.637 [2024-12-07 10:10:22.138981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.139015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.139150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.139183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.139336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.139367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.139646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.139678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-12-07 10:10:22.139922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.139966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 
00:35:53.637 [2024-12-07 10:10:22.140116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-12-07 10:10:22.140148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.140331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.140363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.140582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.140615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.140820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.140852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.141104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.141136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 
00:35:53.638 [2024-12-07 10:10:22.141301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.141320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.141593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.141625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.141727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.141758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.141970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.142003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.142144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.142177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 
00:35:53.638 [2024-12-07 10:10:22.142374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.142405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.142713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.142745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.142894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.142927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.143133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.143167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.143372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.143404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 
00:35:53.638 [2024-12-07 10:10:22.143688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.143702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.143866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.143881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.144066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.144100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.144379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.144411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.144635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.144667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 
00:35:53.638 [2024-12-07 10:10:22.144857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.144889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.145155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.145170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.145282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.145297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.145473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.145504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.145702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.145734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 
00:35:53.638 [2024-12-07 10:10:22.145938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.145985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.146117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.146149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.146346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.146378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.146514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.146546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.146765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.146780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 
00:35:53.638 [2024-12-07 10:10:22.146845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.146876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.147077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.147111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.147324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.147357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.147624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.147639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.147808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.147841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 
00:35:53.638 [2024-12-07 10:10:22.148043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.148076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.148217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.148250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.148384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-12-07 10:10:22.148398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-12-07 10:10:22.148503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.148518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.148674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.148705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 
00:35:53.639 [2024-12-07 10:10:22.148853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.148885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.149149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.149183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.149329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.149362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.149567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.149598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.149804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.149818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 
00:35:53.639 [2024-12-07 10:10:22.149986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.150024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.150233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.150265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.150446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.150478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.150702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.150734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.150931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.150973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 
00:35:53.639 [2024-12-07 10:10:22.151175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.151208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.151409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.151441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.151714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.151746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.152033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.152066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.152374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.152406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 
00:35:53.639 [2024-12-07 10:10:22.152661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.152693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.152994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.153027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.153253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.153285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.153493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.153525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.153734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.153766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 
00:35:53.639 [2024-12-07 10:10:22.154040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.154073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.154278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.154311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.154520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.154552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.154822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.154836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.155097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.155112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 
00:35:53.639 [2024-12-07 10:10:22.155343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.155358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.155593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.155607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.155797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.155829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.156019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.156053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.156253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.156285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 
00:35:53.639 [2024-12-07 10:10:22.156536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.156568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.156828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.156861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.157144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.157178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.157459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.157491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-12-07 10:10:22.157690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-12-07 10:10:22.157722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 
00:35:53.640 [2024-12-07 10:10:22.157926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.157967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.158249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.158281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.158482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.158497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.158672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.158704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.158905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.158936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 
00:35:53.640 [2024-12-07 10:10:22.159207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.159241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.159384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.159416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.159644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.159676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.159871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.159904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.160121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.160154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 
00:35:53.640 [2024-12-07 10:10:22.160405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.160444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.160704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.160735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.161032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.161047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.161210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.161224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.161404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.161436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 
00:35:53.640 [2024-12-07 10:10:22.161684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.161716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.161925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.161962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.162235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.162268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.162478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.162492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.162751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.162765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 
00:35:53.640 [2024-12-07 10:10:22.163013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.163029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.163191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.163205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.163388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.163403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.163615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.163648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.163776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.163807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 
00:35:53.640 [2024-12-07 10:10:22.164061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.164095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.164393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.164426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.164695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.164727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.164975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.165007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.165229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.165270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 
00:35:53.640 [2024-12-07 10:10:22.165510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.165524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.165806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.165837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.166140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.166173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.166373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.166412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.166628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.166643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 
00:35:53.640 [2024-12-07 10:10:22.166861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.166876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.640 [2024-12-07 10:10:22.167058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.640 [2024-12-07 10:10:22.167072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.640 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.167255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.167270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.167504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.167535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.167825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.167858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 
00:35:53.641 [2024-12-07 10:10:22.168137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.168152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.168356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.168389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.168643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.168675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.168994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.169027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.169316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.169349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 
00:35:53.641 [2024-12-07 10:10:22.169603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.169635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.169908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.169940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.170331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.170364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.170648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.170694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.170888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.170919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 
00:35:53.641 [2024-12-07 10:10:22.171197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.171243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.171448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.171481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.171670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.171701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.171916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.171931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.172172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.172188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 
00:35:53.641 [2024-12-07 10:10:22.172432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.172446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.172691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.172724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.173000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.173034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.173299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.173331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.173565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.173597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 
00:35:53.641 [2024-12-07 10:10:22.173866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.173880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.174099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.174114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.174303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.174317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.174558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.174572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.174835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.174867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 
00:35:53.641 [2024-12-07 10:10:22.175073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.175106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.175316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.175349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.175609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.175623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.175828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.175843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 00:35:53.641 [2024-12-07 10:10:22.175944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.641 [2024-12-07 10:10:22.175962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.641 qpair failed and we were unable to recover it. 
00:35:53.641 [2024-12-07 10:10:22.176183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.176215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.176497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.176528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.176730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.176762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.177073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.177107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.177249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.177282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 
00:35:53.642 [2024-12-07 10:10:22.177547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.177561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.177788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.177803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.178025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.178040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.178236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.178250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.178493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.178507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 
00:35:53.642 [2024-12-07 10:10:22.178806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.178838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.179022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.179055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.179258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.179290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.179559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.179590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.179803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.179817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 
00:35:53.642 [2024-12-07 10:10:22.179987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.180019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.180143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.180177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.180382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.180413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.180685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.180699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.180968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.181012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 
00:35:53.642 [2024-12-07 10:10:22.181238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.181277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.181513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.181544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.181721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.181736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.181989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.182023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.182253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.182285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 
00:35:53.642 [2024-12-07 10:10:22.182423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.182459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.182627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.182642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.182808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.182822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.183017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.183049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.183306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.183337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 
00:35:53.642 [2024-12-07 10:10:22.183553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.183566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.183760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.183774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.184014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.184028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.184195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.184209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.184449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.184463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 
00:35:53.642 [2024-12-07 10:10:22.184698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.184712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.642 [2024-12-07 10:10:22.184823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.642 [2024-12-07 10:10:22.184837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.642 qpair failed and we were unable to recover it. 00:35:53.643 [2024-12-07 10:10:22.185022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.643 [2024-12-07 10:10:22.185038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.643 qpair failed and we were unable to recover it. 00:35:53.643 [2024-12-07 10:10:22.185271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.643 [2024-12-07 10:10:22.185303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.643 qpair failed and we were unable to recover it. 00:35:53.643 [2024-12-07 10:10:22.185504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.643 [2024-12-07 10:10:22.185536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.643 qpair failed and we were unable to recover it. 
00:35:53.643 [2024-12-07 10:10:22.185801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.643 [2024-12-07 10:10:22.185834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.643 qpair failed and we were unable to recover it. 00:35:53.643 [2024-12-07 10:10:22.186102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.643 [2024-12-07 10:10:22.186136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.643 qpair failed and we were unable to recover it. 00:35:53.643 [2024-12-07 10:10:22.186441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.643 [2024-12-07 10:10:22.186474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.643 qpair failed and we were unable to recover it. 00:35:53.643 [2024-12-07 10:10:22.186737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.643 [2024-12-07 10:10:22.186770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.643 qpair failed and we were unable to recover it. 00:35:53.643 [2024-12-07 10:10:22.186913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.643 [2024-12-07 10:10:22.186945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.643 qpair failed and we were unable to recover it. 
00:35:53.643 [2024-12-07 10:10:22.187166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.643 [2024-12-07 10:10:22.187199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.643 qpair failed and we were unable to recover it. 00:35:53.643 [2024-12-07 10:10:22.187407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.643 [2024-12-07 10:10:22.187439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.643 qpair failed and we were unable to recover it. 00:35:53.643 [2024-12-07 10:10:22.187793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.643 [2024-12-07 10:10:22.187874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.643 qpair failed and we were unable to recover it. 00:35:53.643 [2024-12-07 10:10:22.188147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.643 [2024-12-07 10:10:22.188186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.643 qpair failed and we were unable to recover it. 00:35:53.643 [2024-12-07 10:10:22.188416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.643 [2024-12-07 10:10:22.188451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.643 qpair failed and we were unable to recover it. 
00:35:53.643 [2024-12-07 10:10:22.188780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.643 [2024-12-07 10:10:22.188831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.643 qpair failed and we were unable to recover it. 00:35:53.643 [2024-12-07 10:10:22.189055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.643 [2024-12-07 10:10:22.189101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.643 qpair failed and we were unable to recover it. 00:35:53.643 [2024-12-07 10:10:22.189332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.643 [2024-12-07 10:10:22.189365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.643 qpair failed and we were unable to recover it. 00:35:53.643 [2024-12-07 10:10:22.189560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.643 [2024-12-07 10:10:22.189576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.643 qpair failed and we were unable to recover it. 00:35:53.643 [2024-12-07 10:10:22.189820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.643 [2024-12-07 10:10:22.189851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.643 qpair failed and we were unable to recover it. 
00:35:53.643 [2024-12-07 10:10:22.190061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.190095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
00:35:53.643 [2024-12-07 10:10:22.190398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.190434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
00:35:53.643 [2024-12-07 10:10:22.190680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.190712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
00:35:53.643 [2024-12-07 10:10:22.191019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.191051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
00:35:53.643 [2024-12-07 10:10:22.191198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.191230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
00:35:53.643 [2024-12-07 10:10:22.191512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.191546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
00:35:53.643 [2024-12-07 10:10:22.191771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.191786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
00:35:53.643 [2024-12-07 10:10:22.191959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.191993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
00:35:53.643 [2024-12-07 10:10:22.192228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.192260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
00:35:53.643 [2024-12-07 10:10:22.192516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.192532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
00:35:53.643 [2024-12-07 10:10:22.192756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.192789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
00:35:53.643 [2024-12-07 10:10:22.192980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.193012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
00:35:53.643 [2024-12-07 10:10:22.193220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.193253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
00:35:53.643 [2024-12-07 10:10:22.193509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.193542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
00:35:53.643 [2024-12-07 10:10:22.193843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.193875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
00:35:53.643 [2024-12-07 10:10:22.194147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.194181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
00:35:53.643 [2024-12-07 10:10:22.194389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.194422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
00:35:53.643 [2024-12-07 10:10:22.194618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.194650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
00:35:53.643 [2024-12-07 10:10:22.194874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-12-07 10:10:22.194889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.195089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.195122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.195343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.195375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.195542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.195575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.195727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.195743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.195985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.196017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.196180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.196214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.196406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.196437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.196657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.196690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.196957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.196972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.197144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.197159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.197389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.197421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.197689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.197722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.198033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.198066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.198275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.198314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.198508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.198523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.198721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.198755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.199029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.199064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.199270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.199305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.199531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.199545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.199644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.199687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.199899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.199932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.200087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.200120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.200343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.200375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.200583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.200599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.200866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.200898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.201096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.201130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.201343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.201376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.201652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.201685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.201825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.201856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.202126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.202159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.202441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.202476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.202654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.202667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.202851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.202883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.203092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.203125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.203258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.203289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.203440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.203454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.644 [2024-12-07 10:10:22.203612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.644 [2024-12-07 10:10:22.203644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.644 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.203841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.203872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.204136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.204172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.204441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.204475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.204783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.204815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.205027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.205061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.205222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.205254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.205449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.205482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.205783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.205828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.205980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.205999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.206124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.206155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.206428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.206461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.206589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.206621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.206871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.206887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.207049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.207065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.207251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.207282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.207495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.207528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.207816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.207854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.208154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.208188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.208456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.208488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.208712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.208745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.208883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.208898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.209004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.209019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.209199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.209220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.209466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.209499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.209731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.209762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.209917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.209961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.210237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.210269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.210471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.210504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.210812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.210844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.211120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.211153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.211402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.211436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.211599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.211615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.211866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.211897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.212121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.212155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.212443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.212478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.212756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.212771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.212932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.645 [2024-12-07 10:10:22.212951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.645 qpair failed and we were unable to recover it.
00:35:53.645 [2024-12-07 10:10:22.213139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.646 [2024-12-07 10:10:22.213156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.646 qpair failed and we were unable to recover it.
00:35:53.646 [2024-12-07 10:10:22.213359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.646 [2024-12-07 10:10:22.213392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.646 qpair failed and we were unable to recover it.
00:35:53.646 [2024-12-07 10:10:22.213707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.213740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 [2024-12-07 10:10:22.213959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.213975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 [2024-12-07 10:10:22.214181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.214197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 [2024-12-07 10:10:22.214410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.214425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 [2024-12-07 10:10:22.214657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.214690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 
00:35:53.646 [2024-12-07 10:10:22.214896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.214927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 [2024-12-07 10:10:22.215157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.215190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 [2024-12-07 10:10:22.215399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.215432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1472032 Killed "${NVMF_APP[@]}" "$@" 00:35:53.646 [2024-12-07 10:10:22.215709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.215724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 [2024-12-07 10:10:22.215885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.215900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 
00:35:53.646 [2024-12-07 10:10:22.216146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.216163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 [2024-12-07 10:10:22.216396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.216410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:35:53.646 [2024-12-07 10:10:22.216639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.216655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:53.646 [2024-12-07 10:10:22.216832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.216847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 
00:35:53.646 [2024-12-07 10:10:22.217080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.217097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:53.646 [2024-12-07 10:10:22.217220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.217240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:53.646 [2024-12-07 10:10:22.217535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.217551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:53.646 [2024-12-07 10:10:22.217719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.217734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 
00:35:53.646 [2024-12-07 10:10:22.217910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.217925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 [2024-12-07 10:10:22.218136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.218152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 [2024-12-07 10:10:22.218342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.218357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 [2024-12-07 10:10:22.218583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.218598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 [2024-12-07 10:10:22.218781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.218796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 
00:35:53.646 [2024-12-07 10:10:22.219046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.219062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 [2024-12-07 10:10:22.219327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.219343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 [2024-12-07 10:10:22.219587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.219602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 [2024-12-07 10:10:22.219825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.219840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 [2024-12-07 10:10:22.220045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.220061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 
00:35:53.646 [2024-12-07 10:10:22.220333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.220348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.646 qpair failed and we were unable to recover it. 00:35:53.646 [2024-12-07 10:10:22.220454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.646 [2024-12-07 10:10:22.220470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.220720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.220735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.220977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.220992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.221155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.221170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 
00:35:53.647 [2024-12-07 10:10:22.221292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.221306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.221468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.221483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.221718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.221733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.221968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.221984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.222171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.222188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 
00:35:53.647 [2024-12-07 10:10:22.222431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.222445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.222639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.222654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.222835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.222850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.223018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.223037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.223278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.223294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 
00:35:53.647 [2024-12-07 10:10:22.223471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.223485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.223710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.223725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.223944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.223963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.224195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.224211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.224386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.224402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 
00:35:53.647 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=1472750 00:35:53.647 [2024-12-07 10:10:22.224659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.224674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 1472750 00:35:53.647 [2024-12-07 10:10:22.224846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.224862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:53.647 [2024-12-07 10:10:22.225027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.225043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.225146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.225160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 
00:35:53.647 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1472750 ']' 00:35:53.647 [2024-12-07 10:10:22.225269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.225285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.225395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.225409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:53.647 [2024-12-07 10:10:22.225563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.225578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.225750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.225766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 
00:35:53.647 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:53.647 [2024-12-07 10:10:22.225980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.225995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.226095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.226111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:53.647 [2024-12-07 10:10:22.226230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.226248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 
00:35:53.647 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:53.647 [2024-12-07 10:10:22.226422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.226438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:53.647 [2024-12-07 10:10:22.226686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.226703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.226971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.647 [2024-12-07 10:10:22.226987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.647 qpair failed and we were unable to recover it. 00:35:53.647 [2024-12-07 10:10:22.227180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.227195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 
00:35:53.648 [2024-12-07 10:10:22.227418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.227437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.227603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.227618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.227716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.227732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.227885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.227900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.228124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.228139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 
00:35:53.648 [2024-12-07 10:10:22.228390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.228405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.228535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.228552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.228735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.228749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.228984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.229000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.229174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.229189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 
00:35:53.648 [2024-12-07 10:10:22.229309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.229323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.229478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.229492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.229685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.229700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.229802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.229818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.229985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.230001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 
00:35:53.648 [2024-12-07 10:10:22.230162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.230177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.230359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.230374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.230593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.230607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.230901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.230915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.231158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.231175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 
00:35:53.648 [2024-12-07 10:10:22.231335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.231350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.231598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.231613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.231798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.231814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.231997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.232013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.232270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.232287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 
00:35:53.648 [2024-12-07 10:10:22.232466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.232482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.232647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.232662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.232781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.232797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.232960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.232975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.233168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.233183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 
00:35:53.648 [2024-12-07 10:10:22.233355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.233370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.233620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.233635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.233798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.233813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.234002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.234019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.234211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.234226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 
00:35:53.648 [2024-12-07 10:10:22.234391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.234406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.648 [2024-12-07 10:10:22.234515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.648 [2024-12-07 10:10:22.234530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.648 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.234707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.234722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.234972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.234988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.235225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.235239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 
00:35:53.649 [2024-12-07 10:10:22.235417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.235435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.235552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.235568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.235864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.235878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.236133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.236149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.236318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.236334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 
00:35:53.649 [2024-12-07 10:10:22.236581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.236595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.236767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.236781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.236867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.236881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.237100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.237115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.237296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.237311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 
00:35:53.649 [2024-12-07 10:10:22.237569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.237584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.237767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.237783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.237868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.237883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.238040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.238055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.238228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.238243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 
00:35:53.649 [2024-12-07 10:10:22.238434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.238449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.238677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.238692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.238887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.238902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.239074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.239089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.239258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.239274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 
00:35:53.649 [2024-12-07 10:10:22.239520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.239534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.239656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.239670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.239850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.239865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.240094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.240110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.240294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.240309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 
00:35:53.649 [2024-12-07 10:10:22.240489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.240505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.240737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.240751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.241018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.241033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.241219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.241234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.241460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.241475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 
00:35:53.649 [2024-12-07 10:10:22.241677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.241691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.241934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.241954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.242184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.242199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.649 [2024-12-07 10:10:22.242370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.649 [2024-12-07 10:10:22.242385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.649 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.242541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.242555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 
00:35:53.650 [2024-12-07 10:10:22.242727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.242742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.243017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.243032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.243268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.243282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.243471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.243487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.243738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.243754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 
00:35:53.650 [2024-12-07 10:10:22.243986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.244005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.244252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.244267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.244438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.244454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.244644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.244659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.244851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.244882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 
00:35:53.650 [2024-12-07 10:10:22.245174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.245189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.245347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.245362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.245562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.245577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.245736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.245752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.245986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.246001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 
00:35:53.650 [2024-12-07 10:10:22.246221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.246235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.246402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.246418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.246590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.246604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.246782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.246796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.247049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.247065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 
00:35:53.650 [2024-12-07 10:10:22.247285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.247301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.247390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.247404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.247644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.247658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.247823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.247838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.247936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.247957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 
00:35:53.650 [2024-12-07 10:10:22.248113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.248129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.248230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.248245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.248491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.248506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.248704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.248718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.248830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.248846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 
00:35:53.650 [2024-12-07 10:10:22.248932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.248953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.249110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.249126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.249297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.249312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.249597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.249612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 00:35:53.650 [2024-12-07 10:10:22.249857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.249873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.650 qpair failed and we were unable to recover it. 
00:35:53.650 [2024-12-07 10:10:22.250030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.650 [2024-12-07 10:10:22.250046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.250242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.250257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.250415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.250431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.250582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.250597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.250756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.250771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 
00:35:53.651 [2024-12-07 10:10:22.250930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.250945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.251221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.251236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.251390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.251404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.251541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.251556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.251819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.251835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 
00:35:53.651 [2024-12-07 10:10:22.252105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.252123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.252298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.252315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.252565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.252580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.252826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.252841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.253004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.253020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 
00:35:53.651 [2024-12-07 10:10:22.253205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.253220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.253334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.253349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.253575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.253590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.253809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.253824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.253995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.254010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 
00:35:53.651 [2024-12-07 10:10:22.254260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.254276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.254511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.254525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.254791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.254806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.254929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.254944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.255189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.255204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 
00:35:53.651 [2024-12-07 10:10:22.255367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.255382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.255626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.255642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.255757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.255772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.256009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.256024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.256199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.256213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 
00:35:53.651 [2024-12-07 10:10:22.256461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.256475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.256693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.256709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.256957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.256973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.257213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.257228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 00:35:53.651 [2024-12-07 10:10:22.257415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.651 [2024-12-07 10:10:22.257429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.651 qpair failed and we were unable to recover it. 
00:35:53.651 [2024-12-07 10:10:22.257600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.257615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.257764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.257779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.258043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.258058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.258237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.258252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.258532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.258547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 
00:35:53.652 [2024-12-07 10:10:22.258765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.258780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.258879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.258893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.259063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.259078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.259307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.259323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.259503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.259518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 
00:35:53.652 [2024-12-07 10:10:22.259744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.259759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.260012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.260029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.260180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.260195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.260364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.260379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.260531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.260546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 
00:35:53.652 [2024-12-07 10:10:22.260774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.260794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.260997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.261012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.261294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.261309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.261474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.261489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.261663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.261679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 
00:35:53.652 [2024-12-07 10:10:22.261921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.261936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.262191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.262208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.262440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.262455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.262700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.262714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.262962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.262979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 
00:35:53.652 [2024-12-07 10:10:22.263241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.263257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.263501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.263516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.263673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.263688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.263899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.263914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.264185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.264201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 
00:35:53.652 [2024-12-07 10:10:22.264423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.264438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.264591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.264605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.264700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.264715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.264820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.264836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.265009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.265024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 
00:35:53.652 [2024-12-07 10:10:22.265216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.265232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.265403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.652 [2024-12-07 10:10:22.265418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.652 qpair failed and we were unable to recover it. 00:35:53.652 [2024-12-07 10:10:22.265654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.265668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.265935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.265954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.266127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.266143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 
00:35:53.653 [2024-12-07 10:10:22.266307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.266334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.266529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.266544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.266709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.266724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.266886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.266902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.267058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.267073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 
00:35:53.653 [2024-12-07 10:10:22.267237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.267252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.267399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.267414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.267678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.267693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.267895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.267911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.268090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.268105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 
00:35:53.653 [2024-12-07 10:10:22.268272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.268288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.268458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.268472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.268643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.268658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.268844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.268858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.269034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.269049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 
00:35:53.653 [2024-12-07 10:10:22.269278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.269296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.269392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.269407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.269567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.269583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.269750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.269765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.269930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.269945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 
00:35:53.653 [2024-12-07 10:10:22.270063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.270078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.270244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.270259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.270448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.270462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.270660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.270675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.270870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.270884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 
00:35:53.653 [2024-12-07 10:10:22.271041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.271056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.271298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.271313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.271483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.271497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.271713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.271727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 00:35:53.653 [2024-12-07 10:10:22.271893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.653 [2024-12-07 10:10:22.271908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.653 qpair failed and we were unable to recover it. 
00:35:53.653 [2024-12-07 10:10:22.272154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.653 [2024-12-07 10:10:22.272170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.653 qpair failed and we were unable to recover it.
00:35:53.653 [2024-12-07 10:10:22.272414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.653 [2024-12-07 10:10:22.272429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.653 qpair failed and we were unable to recover it.
00:35:53.653 [2024-12-07 10:10:22.272544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.653 [2024-12-07 10:10:22.272560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.653 qpair failed and we were unable to recover it.
00:35:53.653 [2024-12-07 10:10:22.272724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.653 [2024-12-07 10:10:22.272739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.653 qpair failed and we were unable to recover it.
00:35:53.653 [2024-12-07 10:10:22.272972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.653 [2024-12-07 10:10:22.272987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.653 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.273215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.273230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.273492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.273507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.273646] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:35:53.654 [2024-12-07 10:10:22.273672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.273689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 [2024-12-07 10:10:22.273691] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.273859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.273875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.274049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.274065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.274218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.274233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.274403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.274418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.274596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.274611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.274797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.274813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.275062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.275076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.275318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.275332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.275503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.275518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.275671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.275686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.275956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.275972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.276140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.276155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.276412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.276427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.276675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.276690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.276802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.276816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.277039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.277055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.277169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.277184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.277401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.277415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.277513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.277528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.277761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.277777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.277873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.277888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.278063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.278077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.278271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.278285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.278470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.278485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.278729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.278743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.278963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.278979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.279243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.279259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.279478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.279493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.279710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.279725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.280008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.280026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.280180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.280195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.280382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.280397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.280551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.654 [2024-12-07 10:10:22.280565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.654 qpair failed and we were unable to recover it.
00:35:53.654 [2024-12-07 10:10:22.280807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.280822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.281077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.281092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.281322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.281338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.281504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.281518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.281737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.281751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.281927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.281941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.282111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.282126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.282255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.282270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.282488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.282503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.282671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.282686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.282918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.282933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.283147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.283183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.283467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.283482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.283656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.283672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.283911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.283926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.284097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.284111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.284265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.284280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.284395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.284409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.284635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.284650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.284820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.284836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.285120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.285136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.285300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.285314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.285506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.285520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.285759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.285777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.285896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.285911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.286016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.286033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.286152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.286165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.286407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.286422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.286607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.286621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.286861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.286876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.287039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.287055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.287319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.287334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.287497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.287511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.655 [2024-12-07 10:10:22.287753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.655 [2024-12-07 10:10:22.287767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.655 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.288022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.288038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.288268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.288285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.288453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.288468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.288693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.288709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.288869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.288884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.289057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.289073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.289233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.289248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.289494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.289508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.289761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.289775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.290004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.290019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.290116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.290132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.290311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.290326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.290494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.290510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.290728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.290743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.290954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.290969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.291138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.291153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.291269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.291283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.291523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.291539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.291731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.656 [2024-12-07 10:10:22.291745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.656 qpair failed and we were unable to recover it.
00:35:53.656 [2024-12-07 10:10:22.292008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.656 [2024-12-07 10:10:22.292024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.656 qpair failed and we were unable to recover it. 00:35:53.656 [2024-12-07 10:10:22.292241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.656 [2024-12-07 10:10:22.292255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.656 qpair failed and we were unable to recover it. 00:35:53.656 [2024-12-07 10:10:22.292425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.656 [2024-12-07 10:10:22.292440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.656 qpair failed and we were unable to recover it. 00:35:53.656 [2024-12-07 10:10:22.292608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.656 [2024-12-07 10:10:22.292622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.656 qpair failed and we were unable to recover it. 00:35:53.656 [2024-12-07 10:10:22.292838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.656 [2024-12-07 10:10:22.292853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.656 qpair failed and we were unable to recover it. 
00:35:53.656 [2024-12-07 10:10:22.292965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.656 [2024-12-07 10:10:22.292982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.656 qpair failed and we were unable to recover it. 00:35:53.656 [2024-12-07 10:10:22.293180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.656 [2024-12-07 10:10:22.293195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.656 qpair failed and we were unable to recover it. 00:35:53.656 [2024-12-07 10:10:22.293425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.656 [2024-12-07 10:10:22.293440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.656 qpair failed and we were unable to recover it. 00:35:53.656 [2024-12-07 10:10:22.293686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.656 [2024-12-07 10:10:22.293701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.656 qpair failed and we were unable to recover it. 00:35:53.656 [2024-12-07 10:10:22.293807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.656 [2024-12-07 10:10:22.293821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.656 qpair failed and we were unable to recover it. 
00:35:53.656 [2024-12-07 10:10:22.294064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.656 [2024-12-07 10:10:22.294085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.656 qpair failed and we were unable to recover it. 00:35:53.656 [2024-12-07 10:10:22.294327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.656 [2024-12-07 10:10:22.294341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.656 qpair failed and we were unable to recover it. 00:35:53.656 [2024-12-07 10:10:22.294525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.656 [2024-12-07 10:10:22.294539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.656 qpair failed and we were unable to recover it. 00:35:53.656 [2024-12-07 10:10:22.294754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.656 [2024-12-07 10:10:22.294768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.656 qpair failed and we were unable to recover it. 00:35:53.656 [2024-12-07 10:10:22.294952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.656 [2024-12-07 10:10:22.294968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.656 qpair failed and we were unable to recover it. 
00:35:53.656 [2024-12-07 10:10:22.295209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.656 [2024-12-07 10:10:22.295223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.656 qpair failed and we were unable to recover it. 00:35:53.656 [2024-12-07 10:10:22.295413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.656 [2024-12-07 10:10:22.295427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.656 qpair failed and we were unable to recover it. 00:35:53.656 [2024-12-07 10:10:22.295616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.295631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.295804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.295818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.296103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.296118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 
00:35:53.657 [2024-12-07 10:10:22.296270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.296285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.296524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.296539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.296635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.296650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.296869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.296884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.297095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.297110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 
00:35:53.657 [2024-12-07 10:10:22.297273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.297289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.297462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.297477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.297694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.297708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.297805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.297820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.298038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.298054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 
00:35:53.657 [2024-12-07 10:10:22.298290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.298304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.298476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.298491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.298601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.298617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.298789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.298804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.298958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.298972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 
00:35:53.657 [2024-12-07 10:10:22.299204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.299218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.299326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.299341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.299591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.299606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.299844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.299859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.300024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.300040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 
00:35:53.657 [2024-12-07 10:10:22.300151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.300165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.300285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.300300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.300481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.300495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.300686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.300701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.300919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.300934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 
00:35:53.657 [2024-12-07 10:10:22.301125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.301147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.301322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.301337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.301590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.301604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.301715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.301730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.301913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.301928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 
00:35:53.657 [2024-12-07 10:10:22.302107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.302127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.302345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.302360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.302464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.302479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.302677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.302693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 00:35:53.657 [2024-12-07 10:10:22.302910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.657 [2024-12-07 10:10:22.302925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.657 qpair failed and we were unable to recover it. 
00:35:53.658 [2024-12-07 10:10:22.303140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.303155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.303264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.303279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.303522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.303536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.303713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.303728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.303967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.303984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 
00:35:53.658 [2024-12-07 10:10:22.304134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.304150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.304236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.304251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.304439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.304455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.304686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.304701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.304868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.304883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 
00:35:53.658 [2024-12-07 10:10:22.304985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.305001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.305228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.305242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.305462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.305478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.305669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.305684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.305833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.305847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 
00:35:53.658 [2024-12-07 10:10:22.306001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.306016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.306176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.306191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.306306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.306321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.306481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.306496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.306678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.306693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 
00:35:53.658 [2024-12-07 10:10:22.306912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.306928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.307033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.307048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.307314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.307329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.307483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.307498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.307597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.307611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 
00:35:53.658 [2024-12-07 10:10:22.307857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.307872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.308160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.308175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.308341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.308355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.308477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.308492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 00:35:53.658 [2024-12-07 10:10:22.308709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.658 [2024-12-07 10:10:22.308724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.658 qpair failed and we were unable to recover it. 
00:35:53.658 [2024-12-07 10:10:22.308976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.658 [2024-12-07 10:10:22.308991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.658 qpair failed and we were unable to recover it.
00:35:53.658 [2024-12-07 10:10:22.309149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.658 [2024-12-07 10:10:22.309163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.658 qpair failed and we were unable to recover it.
00:35:53.658 [2024-12-07 10:10:22.309276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.658 [2024-12-07 10:10:22.309291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.658 qpair failed and we were unable to recover it.
00:35:53.658 [2024-12-07 10:10:22.309447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.658 [2024-12-07 10:10:22.309462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.658 qpair failed and we were unable to recover it.
00:35:53.658 [2024-12-07 10:10:22.309680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.658 [2024-12-07 10:10:22.309695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.658 qpair failed and we were unable to recover it.
00:35:53.658 [2024-12-07 10:10:22.309849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.658 [2024-12-07 10:10:22.309867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.658 qpair failed and we were unable to recover it.
00:35:53.658 [2024-12-07 10:10:22.310035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.658 [2024-12-07 10:10:22.310051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.658 qpair failed and we were unable to recover it.
00:35:53.658 [2024-12-07 10:10:22.310245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.310260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.310480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.310495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.310642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.310656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.310822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.310837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.311003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.311019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.311120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.311134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.311383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.311398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.311498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.311512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.311662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.311676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.311901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.311915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.312041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.312056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.312149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.312164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.312407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.312422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.312640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.312655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.312897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.312912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.313086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.313101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.313216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.313231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.313334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.313349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.313568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.313584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.313835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.313849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.314076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.314091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.314252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.314268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.314469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.314483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.314728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.314743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.315012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.315027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.315295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.315310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.315531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.315546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.315650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.315665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.315752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.315768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.315938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.315963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.316166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.316180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.316370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.316385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.316550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.316565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.316799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.316813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.316940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.316962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.317116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.317131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.317287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.317303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.659 qpair failed and we were unable to recover it.
00:35:53.659 [2024-12-07 10:10:22.317549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.659 [2024-12-07 10:10:22.317564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.317819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.317839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.318019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.318034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.318218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.318233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.318328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.318342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.318496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.318511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.318699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.318713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.318886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.318901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.319124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.319139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.319300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.319314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.319494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.319508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.319694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.319709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.319856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.319870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.319962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.319978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.320149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.320163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.320363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.320377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.320627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.320642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.320817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.320832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.320922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.320937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.321129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.321145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.321298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.321312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.321589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.321603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.321770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.321785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.322064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.322081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.322185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.322203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.322367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.322389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.322631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.322648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.322869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.322884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.323042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.323058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.323170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.323184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.323337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.323351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.323519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.323534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.323789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.323804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.324027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.660 [2024-12-07 10:10:22.324044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.660 qpair failed and we were unable to recover it.
00:35:53.660 [2024-12-07 10:10:22.324266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.661 [2024-12-07 10:10:22.324281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.661 qpair failed and we were unable to recover it.
00:35:53.661 [2024-12-07 10:10:22.324454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.661 [2024-12-07 10:10:22.324480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.661 qpair failed and we were unable to recover it.
00:35:53.661 [2024-12-07 10:10:22.324728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.661 [2024-12-07 10:10:22.324743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.661 qpair failed and we were unable to recover it.
00:35:53.661 [2024-12-07 10:10:22.324910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.661 [2024-12-07 10:10:22.324925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.661 qpair failed and we were unable to recover it.
00:35:53.661 [2024-12-07 10:10:22.325034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.661 [2024-12-07 10:10:22.325050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.661 qpair failed and we were unable to recover it.
00:35:53.661 [2024-12-07 10:10:22.325209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.661 [2024-12-07 10:10:22.325224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.661 qpair failed and we were unable to recover it.
00:35:53.661 [2024-12-07 10:10:22.325325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.661 [2024-12-07 10:10:22.325339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.661 qpair failed and we were unable to recover it.
00:35:53.661 [2024-12-07 10:10:22.325501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.661 [2024-12-07 10:10:22.325519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.661 qpair failed and we were unable to recover it.
00:35:53.661 [2024-12-07 10:10:22.325781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.661 [2024-12-07 10:10:22.325797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.661 qpair failed and we were unable to recover it.
00:35:53.661 [2024-12-07 10:10:22.325962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.661 [2024-12-07 10:10:22.325977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.661 qpair failed and we were unable to recover it.
00:35:53.661 [2024-12-07 10:10:22.326175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.661 [2024-12-07 10:10:22.326196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.661 qpair failed and we were unable to recover it.
00:35:53.661 [2024-12-07 10:10:22.326288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.661 [2024-12-07 10:10:22.326303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.661 qpair failed and we were unable to recover it.
00:35:53.936 [2024-12-07 10:10:22.326465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.936 [2024-12-07 10:10:22.326479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.936 qpair failed and we were unable to recover it.
00:35:53.936 [2024-12-07 10:10:22.326722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.936 [2024-12-07 10:10:22.326737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.936 qpair failed and we were unable to recover it.
00:35:53.936 [2024-12-07 10:10:22.326891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.936 [2024-12-07 10:10:22.326905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.936 qpair failed and we were unable to recover it.
00:35:53.936 [2024-12-07 10:10:22.327039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.936 [2024-12-07 10:10:22.327055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.936 qpair failed and we were unable to recover it.
00:35:53.936 [2024-12-07 10:10:22.327210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.936 [2024-12-07 10:10:22.327224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.936 qpair failed and we were unable to recover it.
00:35:53.936 [2024-12-07 10:10:22.327381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.936 [2024-12-07 10:10:22.327396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.936 qpair failed and we were unable to recover it.
00:35:53.936 [2024-12-07 10:10:22.327549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.936 [2024-12-07 10:10:22.327563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.936 qpair failed and we were unable to recover it.
00:35:53.936 [2024-12-07 10:10:22.327807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.936 [2024-12-07 10:10:22.327822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.936 qpair failed and we were unable to recover it.
00:35:53.936 [2024-12-07 10:10:22.327975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.936 [2024-12-07 10:10:22.327990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.936 qpair failed and we were unable to recover it.
00:35:53.936 [2024-12-07 10:10:22.328172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.936 [2024-12-07 10:10:22.328187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.936 qpair failed and we were unable to recover it.
00:35:53.936 [2024-12-07 10:10:22.328351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.936 [2024-12-07 10:10:22.328368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.936 qpair failed and we were unable to recover it.
00:35:53.936 [2024-12-07 10:10:22.328473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.936 [2024-12-07 10:10:22.328488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.936 qpair failed and we were unable to recover it.
00:35:53.936 [2024-12-07 10:10:22.328684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.936 [2024-12-07 10:10:22.328699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.936 qpair failed and we were unable to recover it.
00:35:53.936 [2024-12-07 10:10:22.328819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.936 [2024-12-07 10:10:22.328834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.936 qpair failed and we were unable to recover it.
00:35:53.937 [2024-12-07 10:10:22.329070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.937 [2024-12-07 10:10:22.329085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.937 qpair failed and we were unable to recover it.
00:35:53.937 [2024-12-07 10:10:22.329190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.937 [2024-12-07 10:10:22.329205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.937 qpair failed and we were unable to recover it.
00:35:53.937 [2024-12-07 10:10:22.329427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.937 [2024-12-07 10:10:22.329443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.937 qpair failed and we were unable to recover it.
00:35:53.937 [2024-12-07 10:10:22.329626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.937 [2024-12-07 10:10:22.329641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.937 qpair failed and we were unable to recover it.
00:35:53.937 [2024-12-07 10:10:22.329824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.937 [2024-12-07 10:10:22.329840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.937 qpair failed and we were unable to recover it.
00:35:53.937 [2024-12-07 10:10:22.330088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.937 [2024-12-07 10:10:22.330103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.937 qpair failed and we were unable to recover it.
00:35:53.937 [2024-12-07 10:10:22.330345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.937 [2024-12-07 10:10:22.330360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.937 qpair failed and we were unable to recover it.
00:35:53.937 [2024-12-07 10:10:22.330596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.937 [2024-12-07 10:10:22.330612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.937 qpair failed and we were unable to recover it.
00:35:53.937 [2024-12-07 10:10:22.330859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.937 [2024-12-07 10:10:22.330875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.937 qpair failed and we were unable to recover it.
00:35:53.937 [2024-12-07 10:10:22.331095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.937 [2024-12-07 10:10:22.331110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.937 qpair failed and we were unable to recover it.
00:35:53.937 [2024-12-07 10:10:22.331282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.937 [2024-12-07 10:10:22.331296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.937 qpair failed and we were unable to recover it.
00:35:53.937 [2024-12-07 10:10:22.331480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.331495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.331735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.331750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.331900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.331915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.332027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.332043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.332320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.332334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 
00:35:53.937 [2024-12-07 10:10:22.332553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.332568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.332733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.332748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.332927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.332941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.333142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.333158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.333398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.333413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 
00:35:53.937 [2024-12-07 10:10:22.333665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.333684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.333851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.333866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.334062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.334079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.334166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.334180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.334349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.334364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 
00:35:53.937 [2024-12-07 10:10:22.334603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.334618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.334715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.334730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.334986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.335003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.335172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.335187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.335370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.335385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 
00:35:53.937 [2024-12-07 10:10:22.335597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.335612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.335842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.335857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.336097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.336112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.336354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.336369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.336594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.336610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 
00:35:53.937 [2024-12-07 10:10:22.336859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-12-07 10:10:22.336874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-12-07 10:10:22.337042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.337057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.337251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.337266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.337426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.337440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.337683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.337698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 
00:35:53.938 [2024-12-07 10:10:22.337885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.337900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.338129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.338145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.338395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.338410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.338631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.338646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.338850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.338864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 
00:35:53.938 [2024-12-07 10:10:22.339015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.339031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.339203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.339218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.339393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.339408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.339595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.339610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.339829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.339844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 
00:35:53.938 [2024-12-07 10:10:22.340063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.340078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.340193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.340208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.340465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.340479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.340568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.340583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.340740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.340756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 
00:35:53.938 [2024-12-07 10:10:22.340854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.340868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.341050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.341065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.341257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.341272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.341422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.341437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.341607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.341622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 
00:35:53.938 [2024-12-07 10:10:22.341775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.341793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.341964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.341980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.342224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.342238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.342414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.342428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.342582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.342596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 
00:35:53.938 [2024-12-07 10:10:22.342836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.342851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.343089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.343104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.343325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.343340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.343494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.343510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.343719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.343735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 
00:35:53.938 [2024-12-07 10:10:22.343927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.343941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.344123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.344139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.344339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.344354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-12-07 10:10:22.344581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-12-07 10:10:22.344596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.344751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.344766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 
00:35:53.939 [2024-12-07 10:10:22.345011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.345026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.345206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.345221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.345482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.345496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.345763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.345777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.345969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.345986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 
00:35:53.939 [2024-12-07 10:10:22.346173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.346188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.346391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.346405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.346628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.346643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.346886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.346901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.347053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.347068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 
00:35:53.939 [2024-12-07 10:10:22.347222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.347237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.347481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.347496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.347691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.347728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.347966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.347983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.348175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.348192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 
00:35:53.939 [2024-12-07 10:10:22.348431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.348447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.348464] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:53.939 [2024-12-07 10:10:22.348622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.348638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.348813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.348829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.349078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.349096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.349333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.349348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 
00:35:53.939 [2024-12-07 10:10:22.349527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.349542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.349723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.349738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.349897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.349912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.350161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.350178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.350351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.350367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 
00:35:53.939 [2024-12-07 10:10:22.350609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.350629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.350871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.350886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.351122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.351139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.351380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.351396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.351612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.351628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 
00:35:53.939 [2024-12-07 10:10:22.351797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.351812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.351988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.352005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.352125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.352140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.352357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.352372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-12-07 10:10:22.352619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.352633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 
00:35:53.939 [2024-12-07 10:10:22.352805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-12-07 10:10:22.352821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.352987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.353003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.353118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.353134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.353377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.353393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.353687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.353703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 
00:35:53.940 [2024-12-07 10:10:22.353920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.353935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.354159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.354175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.354334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.354350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.354576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.354592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.354832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.354847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 
00:35:53.940 [2024-12-07 10:10:22.355021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.355038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.355253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.355269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.355493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.355509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.355776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.355796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.356038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.356054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 
00:35:53.940 [2024-12-07 10:10:22.356274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.356290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.356559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.356576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.356681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.356699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.356857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.356873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.356972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.356988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 
00:35:53.940 [2024-12-07 10:10:22.357230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.357246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.357417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.357434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.357676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.357696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.357866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.357883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.358042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.358059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 
00:35:53.940 [2024-12-07 10:10:22.358306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.358325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.358514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.358532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.358687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.358702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.358929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.358944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.359149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.359165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 
00:35:53.940 [2024-12-07 10:10:22.359289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.359305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.359418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.359435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.359639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.359654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.359816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.359831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.360055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.360074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 
00:35:53.940 [2024-12-07 10:10:22.360175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.360190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.360412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.360429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.360582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.360598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-12-07 10:10:22.360752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-12-07 10:10:22.360766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.360962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.360977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 
00:35:53.941 [2024-12-07 10:10:22.361142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.361158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.361314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.361329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.361496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.361513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.361694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.361708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.361913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.361930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 
00:35:53.941 [2024-12-07 10:10:22.362220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.362237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.362425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.362441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.362555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.362570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.362672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.362688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.362933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.362953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 
00:35:53.941 [2024-12-07 10:10:22.363171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.363186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.363373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.363387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.363496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.363511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.363594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.363609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.363849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.363864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 
00:35:53.941 [2024-12-07 10:10:22.363968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.363984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.364262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.364279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.364447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.364463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.364639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.364656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.364897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.364914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 
00:35:53.941 [2024-12-07 10:10:22.365032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.365049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.365200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.365215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.365432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.365448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.365671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.365687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.365838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.365854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 
00:35:53.941 [2024-12-07 10:10:22.366027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.366042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.366309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.366324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.366547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.366562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.366747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.366763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.366954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.366970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 
00:35:53.941 [2024-12-07 10:10:22.367238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.367254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-12-07 10:10:22.367418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-12-07 10:10:22.367433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.942 [2024-12-07 10:10:22.367728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.942 [2024-12-07 10:10:22.367745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.942 qpair failed and we were unable to recover it. 00:35:53.942 [2024-12-07 10:10:22.367946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.942 [2024-12-07 10:10:22.367970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.942 qpair failed and we were unable to recover it. 00:35:53.942 [2024-12-07 10:10:22.368189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.942 [2024-12-07 10:10:22.368205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.942 qpair failed and we were unable to recover it. 
00:35:53.942 [2024-12-07 10:10:22.368364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.942 [2024-12-07 10:10:22.368381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.942 qpair failed and we were unable to recover it. 00:35:53.942 [2024-12-07 10:10:22.368556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.942 [2024-12-07 10:10:22.368574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.942 qpair failed and we were unable to recover it. 00:35:53.942 [2024-12-07 10:10:22.368840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.942 [2024-12-07 10:10:22.368856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.942 qpair failed and we were unable to recover it. 00:35:53.942 [2024-12-07 10:10:22.369115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.942 [2024-12-07 10:10:22.369133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.942 qpair failed and we were unable to recover it. 00:35:53.942 [2024-12-07 10:10:22.369305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.942 [2024-12-07 10:10:22.369322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.942 qpair failed and we were unable to recover it. 
00:35:53.942 [2024-12-07 10:10:22.369489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.369506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.369725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.369740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.369965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.369981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.370257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.370273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.370458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.370474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.370626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.370646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.370888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.370906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.371124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.371140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.371387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.371404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.371623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.371640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.371886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.371902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.372026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.372042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.372248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.372263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.372512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.372527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.372781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.372797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.372964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.372980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.373152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.373167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.373335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.373351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.373567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.373582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.373698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.373713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.373916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.373931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.374094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.374109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.374267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.374281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.374550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.374565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.374755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.374769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.374941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.374962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.375062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.375077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.375246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.375261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.375449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.375464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.942 [2024-12-07 10:10:22.375662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.942 [2024-12-07 10:10:22.375678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.942 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.375846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.375861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.376100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.376116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.376310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.376330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.376518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.376534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.376699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.376716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.376966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.376981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.377158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.377172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.377413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.377428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.377594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.377608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.377825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.377840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.378100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.378116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.378311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.378325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.378594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.378608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.378872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.378887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.379045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.379060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.379277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.379291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.379409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.379424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.379610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.379624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.379866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.379881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.380035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.380049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.380225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.380239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.380500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.380514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.380778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.380792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.381039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.381055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.381169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.381184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.381420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.381434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.381675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.381690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.381839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.381853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.382081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.382097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.382260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.382282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.382500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.382514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.382730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.382745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.383007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.383022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.383238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.383252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.383519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.383534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.383728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.383743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.383921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.943 [2024-12-07 10:10:22.383936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.943 qpair failed and we were unable to recover it.
00:35:53.943 [2024-12-07 10:10:22.384056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.384071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.384240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.384255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.384492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.384506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.384748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.384762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.384983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.384999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.385110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.385125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.385335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.385371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.385556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.385572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.385761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.385777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.386040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.386057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.386237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.386251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.386402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.386416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.386660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.386680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.386871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.386888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.387043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.387060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.387238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.387254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.387428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.387444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.387661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.387677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.387835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.387850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.388100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.388121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.388306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.388320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.388566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.388583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.388848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.388865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.389065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.389084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.389304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.389320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.389558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.389574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.389794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.389810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.389964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.389980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.390134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.390148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.390283] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:53.944 [2024-12-07 10:10:22.390312] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:53.944 [2024-12-07 10:10:22.390321] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:53.944 [2024-12-07 10:10:22.390327] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:53.944 [2024-12-07 10:10:22.390332] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:53.944 [2024-12-07 10:10:22.390382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.390396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.390445] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5
00:35:53.944 [2024-12-07 10:10:22.390626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.390553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6
00:35:53.944 [2024-12-07 10:10:22.390645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 [2024-12-07 10:10:22.390658] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.390659] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7
00:35:53.944 [2024-12-07 10:10:22.390825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.390840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.391076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.391093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.944 [2024-12-07 10:10:22.391315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.944 [2024-12-07 10:10:22.391329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.944 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.391498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.391513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.391636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.391650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.391743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.391757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.392000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.392016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.392183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.392198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.392440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.392455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.392564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.392578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.392823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.392838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.393072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.393088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.393305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.393321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.393594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.393609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.393800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.393815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.394086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.394101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.394322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.394336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.394505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.394520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.394734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.394748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.394915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.394930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.395101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.395117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.395359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.395374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.395593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.395608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.395836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.395851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.396029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.396044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.396215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.396233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.396411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.396426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.396668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.396684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.396853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.396868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.397021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.397036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.397295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.397309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.397573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.397588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.397740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.397754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.397995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.398010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.398162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.398177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.945 qpair failed and we were unable to recover it.
00:35:53.945 [2024-12-07 10:10:22.398391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.945 [2024-12-07 10:10:22.398406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.398635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.398650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.398918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.398934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.399203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.399229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.399394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.399409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.399592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.399607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.399838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.399853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.400031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.400046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.400319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.400334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.400564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.400579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.400814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.400829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.401078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.401094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.401352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.401367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.401587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.401602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.401851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.401865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.402056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.402071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.402188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.402203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.402390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.402405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.402645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.402660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.402925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.402941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.403168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.403184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.403428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.403443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.403625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.403641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.403886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.403903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.404139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.404155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.404349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.404365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.404625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.404642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.404864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.404880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.405099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.405115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.405340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.405356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.405519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.405534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.405732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.405747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.406001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.406017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.406243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.406259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.406370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.406384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.406602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.406618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.406836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.406851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-12-07 10:10:22.406970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-12-07 10:10:22.406985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.947 [2024-12-07 10:10:22.407246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.947 [2024-12-07 10:10:22.407261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.947 qpair failed and we were unable to recover it.
00:35:53.947 [2024-12-07 10:10:22.407481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.947 [2024-12-07 10:10:22.407496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.947 qpair failed and we were unable to recover it.
00:35:53.947 [2024-12-07 10:10:22.407736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.947 [2024-12-07 10:10:22.407752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.947 qpair failed and we were unable to recover it.
00:35:53.947 [2024-12-07 10:10:22.407994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.947 [2024-12-07 10:10:22.408011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.947 qpair failed and we were unable to recover it.
00:35:53.947 [2024-12-07 10:10:22.408170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.947 [2024-12-07 10:10:22.408184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.947 qpair failed and we were unable to recover it.
00:35:53.947 [2024-12-07 10:10:22.408422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.947 [2024-12-07 10:10:22.408437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.947 qpair failed and we were unable to recover it.
00:35:53.947 [2024-12-07 10:10:22.408667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.947 [2024-12-07 10:10:22.408686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.947 qpair failed and we were unable to recover it.
00:35:53.947 [2024-12-07 10:10:22.408908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.947 [2024-12-07 10:10:22.408923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.947 qpair failed and we were unable to recover it.
00:35:53.947 [2024-12-07 10:10:22.409077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.947 [2024-12-07 10:10:22.409093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.947 qpair failed and we were unable to recover it.
00:35:53.947 [2024-12-07 10:10:22.409311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.947 [2024-12-07 10:10:22.409325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.947 qpair failed and we were unable to recover it.
00:35:53.947 [2024-12-07 10:10:22.409561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.947 [2024-12-07 10:10:22.409576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.947 qpair failed and we were unable to recover it.
00:35:53.947 [2024-12-07 10:10:22.409819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.947 [2024-12-07 10:10:22.409834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.947 qpair failed and we were unable to recover it.
00:35:53.947 [2024-12-07 10:10:22.409989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.947 [2024-12-07 10:10:22.410005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.947 qpair failed and we were unable to recover it.
00:35:53.947 [2024-12-07 10:10:22.410156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.947 [2024-12-07 10:10:22.410171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.947 qpair failed and we were unable to recover it.
00:35:53.947 [2024-12-07 10:10:22.410385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.947 [2024-12-07 10:10:22.410400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.947 qpair failed and we were unable to recover it.
00:35:53.947 [2024-12-07 10:10:22.410670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.410685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-12-07 10:10:22.410928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.410943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-12-07 10:10:22.411166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.411180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-12-07 10:10:22.411425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.411441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-12-07 10:10:22.411610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.411625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 
00:35:53.947 [2024-12-07 10:10:22.411792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.411808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-12-07 10:10:22.412053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.412069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-12-07 10:10:22.412286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.412300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-12-07 10:10:22.412589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.412605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-12-07 10:10:22.412851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.412868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 
00:35:53.947 [2024-12-07 10:10:22.413114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.413130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-12-07 10:10:22.413370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.413384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-12-07 10:10:22.413534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.413549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-12-07 10:10:22.413792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.413808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-12-07 10:10:22.414073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.414090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 
00:35:53.947 [2024-12-07 10:10:22.414193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.414208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-12-07 10:10:22.414383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.414398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-12-07 10:10:22.414638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.414653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-12-07 10:10:22.414842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.414861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-12-07 10:10:22.415046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.415062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 
00:35:53.947 [2024-12-07 10:10:22.415312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.415328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-12-07 10:10:22.415506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-12-07 10:10:22.415521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-12-07 10:10:22.415766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.415781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.416029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.416044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.416209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.416224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 
00:35:53.948 [2024-12-07 10:10:22.416473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.416488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.416731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.416745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.416914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.416928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.417150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.417165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.417380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.417396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 
00:35:53.948 [2024-12-07 10:10:22.417635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.417650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.417867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.417882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.418133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.418148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.418299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.418314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.418475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.418490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 
00:35:53.948 [2024-12-07 10:10:22.418753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.418769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.418936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.418966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.419158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.419174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.419451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.419466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.419713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.419729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 
00:35:53.948 [2024-12-07 10:10:22.419901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.419916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.420076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.420093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.420340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.420355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.420542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.420557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.420846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.420861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 
00:35:53.948 [2024-12-07 10:10:22.421083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.421098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.421340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.421356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.421572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.421586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.421806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.421821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.422114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.422129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 
00:35:53.948 [2024-12-07 10:10:22.422284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.422299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.422548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.422564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.422679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.422694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.422854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.422869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.423091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.423108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 
00:35:53.948 [2024-12-07 10:10:22.423214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.423228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.423473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.423488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.423668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.423684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.423852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.423867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-12-07 10:10:22.424148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-12-07 10:10:22.424193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 
00:35:53.949 [2024-12-07 10:10:22.424301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.424317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.424556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.424570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.424767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.424782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.424974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.424989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.425164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.425178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 
00:35:53.949 [2024-12-07 10:10:22.425414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.425429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.425609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.425623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.425864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.425879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.426120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.426135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.426299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.426313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 
00:35:53.949 [2024-12-07 10:10:22.426473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.426487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.426592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.426607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.426850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.426870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.426966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.426981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.427159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.427173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 
00:35:53.949 [2024-12-07 10:10:22.427401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.427415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.427600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.427614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.427711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.427725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.427951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.427966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.428115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.428129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 
00:35:53.949 [2024-12-07 10:10:22.428318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.428332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.428494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.428508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.428749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.428763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.429003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.429018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-12-07 10:10:22.429236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-12-07 10:10:22.429251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 
00:35:53.949 [2024-12-07 10:10:22.429353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.949 [2024-12-07 10:10:22.429367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.949 qpair failed and we were unable to recover it.
00:35:53.949 [2024-12-07 10:10:22.429538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.949 [2024-12-07 10:10:22.429553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.949 qpair failed and we were unable to recover it.
00:35:53.949 [2024-12-07 10:10:22.429719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.949 [2024-12-07 10:10:22.429734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.949 qpair failed and we were unable to recover it.
00:35:53.949 [2024-12-07 10:10:22.429980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.949 [2024-12-07 10:10:22.429995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.949 qpair failed and we were unable to recover it.
00:35:53.949 [2024-12-07 10:10:22.430197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.949 [2024-12-07 10:10:22.430212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.949 qpair failed and we were unable to recover it.
00:35:53.949 [2024-12-07 10:10:22.430377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.949 [2024-12-07 10:10:22.430391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.949 qpair failed and we were unable to recover it.
00:35:53.949 [2024-12-07 10:10:22.430575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.949 [2024-12-07 10:10:22.430590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.949 qpair failed and we were unable to recover it.
00:35:53.949 [2024-12-07 10:10:22.430773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.949 [2024-12-07 10:10:22.430787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.949 qpair failed and we were unable to recover it.
00:35:53.949 [2024-12-07 10:10:22.431006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.949 [2024-12-07 10:10:22.431021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.949 qpair failed and we were unable to recover it.
00:35:53.949 [2024-12-07 10:10:22.431288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.949 [2024-12-07 10:10:22.431302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.949 qpair failed and we were unable to recover it.
00:35:53.949 [2024-12-07 10:10:22.431482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.949 [2024-12-07 10:10:22.431496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.949 qpair failed and we were unable to recover it.
00:35:53.949 [2024-12-07 10:10:22.431665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.949 [2024-12-07 10:10:22.431680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.949 qpair failed and we were unable to recover it.
00:35:53.949 [2024-12-07 10:10:22.431926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.949 [2024-12-07 10:10:22.431941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.432201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.432217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.432492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.432512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.432765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.432781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.433006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.433023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.433273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.433292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.433541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.433561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.433738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.433757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.433943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.433964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.434149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.434166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.434414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.434433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.434733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.434750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.434861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.434876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.435033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.435049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.435281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.435296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.435524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.435539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.435692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.435708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.435924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.435940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.436133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.436148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.436388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.436402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.436592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.436607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.436849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.436864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.437108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.437123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.437292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.437307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.437524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.437539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.437814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.437829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.437989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.438004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.438246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.438261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.438446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.438460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.438676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.438695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.438952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.438967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.439219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.439234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.439466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.439481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.439686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.439700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.439967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.950 [2024-12-07 10:10:22.439982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 [2024-12-07 10:10:22.440175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.440190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.440435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.440450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.440621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.440637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.440852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.440867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.441110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.441126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.441236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.441250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.441401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.441416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.441676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.441692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.441873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.441888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.442172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.442187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.442293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.442308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.442533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.442547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.442721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.442735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.442966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.442982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.443143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.443158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.443324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.443339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.443450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.443465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.443680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.443695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.443888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.443903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.444065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.444081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.444320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.444336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.444574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.444592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.444840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.444855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.445096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.445111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.445352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.445368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.445613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.445628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.445822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.445836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.446016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.446031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.446288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.446304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.446470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.446486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.446640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.446655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.446920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.446935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.447216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.447233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.447451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.447467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.447652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.447667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.447915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.447930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.448189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.448204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.951 [2024-12-07 10:10:22.448434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.951 [2024-12-07 10:10:22.448448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.951 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.448710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.448725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.448838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.448853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.449091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.449106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.449347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.449362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.449526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.449540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.449725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.449739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.449982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.449997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.450187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.450201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.450415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.450429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.450599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.450614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.450810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.450828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.451012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.451027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.451132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.451147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.451387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.451402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.451571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.451586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.451823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.451837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.452075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.452090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.452345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.452359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.452531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.452545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.452784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.452799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.453034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.453049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.453304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.453318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.453481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.453496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.453713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.453727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.453902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.453917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.454149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.454164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.454390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.454405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.454668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.454682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.454928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.454943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.455120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.952 [2024-12-07 10:10:22.455135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420
00:35:53.952 qpair failed and we were unable to recover it.
00:35:53.952 [2024-12-07 10:10:22.455378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.952 [2024-12-07 10:10:22.455392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.952 qpair failed and we were unable to recover it. 00:35:53.952 [2024-12-07 10:10:22.455556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.952 [2024-12-07 10:10:22.455570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.952 qpair failed and we were unable to recover it. 00:35:53.952 [2024-12-07 10:10:22.455844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.952 [2024-12-07 10:10:22.455859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.952 qpair failed and we were unable to recover it. 00:35:53.952 [2024-12-07 10:10:22.456147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.952 [2024-12-07 10:10:22.456162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.952 qpair failed and we were unable to recover it. 00:35:53.952 [2024-12-07 10:10:22.456265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.952 [2024-12-07 10:10:22.456279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.952 qpair failed and we were unable to recover it. 
00:35:53.952 [2024-12-07 10:10:22.456519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.952 [2024-12-07 10:10:22.456533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.952 qpair failed and we were unable to recover it. 00:35:53.952 [2024-12-07 10:10:22.456777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.952 [2024-12-07 10:10:22.456792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.952 qpair failed and we were unable to recover it. 00:35:53.952 [2024-12-07 10:10:22.456980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.952 [2024-12-07 10:10:22.456995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.952 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.457178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.457193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.457457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.457472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 
00:35:53.953 [2024-12-07 10:10:22.457620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.457634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.457875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.457890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.458161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.458176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.458441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.458455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.458621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.458635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 
00:35:53.953 [2024-12-07 10:10:22.458799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.458813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.458981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.458996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.459198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.459212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.459358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.459372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.459589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.459603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 
00:35:53.953 [2024-12-07 10:10:22.459764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.459779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2159010 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.459905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.459938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.460170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.460185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.460401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.460415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.460696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.460711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 
00:35:53.953 [2024-12-07 10:10:22.460831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.460845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.461059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.461074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.461266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.461280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.461469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.461483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.461734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.461748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 
00:35:53.953 [2024-12-07 10:10:22.461935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.461954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.462246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.462261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.462438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.462452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.462696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.462710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.462884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.462898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 
00:35:53.953 [2024-12-07 10:10:22.463064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.463080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.463239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.463254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.463511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.463525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.463717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.463731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.463952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.463967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 
00:35:53.953 [2024-12-07 10:10:22.464203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.464217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.464378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.464393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.464627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.464641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.464859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.464873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 00:35:53.953 [2024-12-07 10:10:22.465112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.953 [2024-12-07 10:10:22.465128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.953 qpair failed and we were unable to recover it. 
00:35:53.953 [2024-12-07 10:10:22.465353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.465367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.465608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.465623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.465788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.465802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.465993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.466008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.466247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.466261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 
00:35:53.954 [2024-12-07 10:10:22.466503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.466518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.466760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.466774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.467015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.467030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.467217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.467232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.467474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.467488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 
00:35:53.954 [2024-12-07 10:10:22.467731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.467745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.467917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.467931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.468179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.468194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.468410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.468424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.468640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.468654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 
00:35:53.954 [2024-12-07 10:10:22.468825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.468840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.469029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.469047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.469166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.469180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.469367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.469382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.469646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.469660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 
00:35:53.954 [2024-12-07 10:10:22.469920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.469934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.470162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.470176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.470417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.470432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.470592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.470606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.470727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.470740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 
00:35:53.954 [2024-12-07 10:10:22.470976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.470990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.471231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.471245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.471473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.471487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.471655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.471669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.471909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.471923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 
00:35:53.954 [2024-12-07 10:10:22.472170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.472185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.472361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.472376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.472565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.472580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.472825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.472839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.473074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.473088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 
00:35:53.954 [2024-12-07 10:10:22.473334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.473348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.954 [2024-12-07 10:10:22.473505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.954 [2024-12-07 10:10:22.473519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.954 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.473803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.473816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.474039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.474053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.474281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.474296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 
00:35:53.955 [2024-12-07 10:10:22.474572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.474586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.474824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.474838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.474956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.474971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.475084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.475098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.475313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.475327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 
00:35:53.955 [2024-12-07 10:10:22.475485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.475498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.475719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.475733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.475910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.475924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.476087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.476101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.476375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.476389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 
00:35:53.955 [2024-12-07 10:10:22.476628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.476642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.476876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.476890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.477107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.477122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.477290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.477304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.477413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.477427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 
00:35:53.955 [2024-12-07 10:10:22.477665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.477679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.477898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.477915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.478077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.478091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.478325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.478340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.478506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.478521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 
00:35:53.955 [2024-12-07 10:10:22.478740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.478755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.478996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.479011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.479299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.479313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.479497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.479511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.479773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.479788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 
00:35:53.955 [2024-12-07 10:10:22.480022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.480037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.480215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.480229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.480336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.480350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.480587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.480601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-12-07 10:10:22.480784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-12-07 10:10:22.480798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 
00:35:53.956 [2024-12-07 10:10:22.481016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.481032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.481195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.481210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.481371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.481385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.481650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.481664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.481831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.481845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 
00:35:53.956 [2024-12-07 10:10:22.482006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.482020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.482134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.482147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.482386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.482400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.482552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.482566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.482794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.482808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 
00:35:53.956 [2024-12-07 10:10:22.482973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.482988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.483216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.483230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.483445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.483459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.483647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.483662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.483900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.483914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 
00:35:53.956 [2024-12-07 10:10:22.484081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.484096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.484334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.484349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.484512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.484525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.484671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.484685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.484846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.484860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 
00:35:53.956 [2024-12-07 10:10:22.485052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.485067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.485179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.485194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.485380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.485394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.485633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.485647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.485888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.485902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 
00:35:53.956 [2024-12-07 10:10:22.486081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.486096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.486248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.486268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.486435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.486449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.486617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.486631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.486854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.486869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 
00:35:53.956 [2024-12-07 10:10:22.487137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.487152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.487417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.487431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.487583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.487597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.487762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.487777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.488018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.488033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 
00:35:53.956 [2024-12-07 10:10:22.488209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.488223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.488462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-12-07 10:10:22.488476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-12-07 10:10:22.488638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-12-07 10:10:22.488651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-12-07 10:10:22.488892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-12-07 10:10:22.488905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-12-07 10:10:22.489070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-12-07 10:10:22.489084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 
00:35:53.957 [2024-12-07 10:10:22.489266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-12-07 10:10:22.489280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-12-07 10:10:22.489447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-12-07 10:10:22.489461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-12-07 10:10:22.489696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-12-07 10:10:22.489710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-12-07 10:10:22.489925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-12-07 10:10:22.489939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-12-07 10:10:22.490189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-12-07 10:10:22.490203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 
00:35:53.957 [2024-12-07 10:10:22.490437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-12-07 10:10:22.490452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-12-07 10:10:22.490697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-12-07 10:10:22.490711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-12-07 10:10:22.490876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-12-07 10:10:22.490890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-12-07 10:10:22.491128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-12-07 10:10:22.491143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-12-07 10:10:22.491305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-12-07 10:10:22.491319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 
00:35:53.957 [2024-12-07 10:10:22.491577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-12-07 10:10:22.491592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:53.957 [2024-12-07 10:10:22.491764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-12-07 10:10:22.491778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-12-07 10:10:22.491928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-12-07 10:10:22.491942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:35:53.957 [2024-12-07 10:10:22.492120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-12-07 10:10:22.492135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 
00:35:53.957 [2024-12-07 10:10:22.492318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.957 [2024-12-07 10:10:22.492333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.957 qpair failed and we were unable to recover it.
00:35:53.957 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:35:53.957 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:53.957 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:53.957 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats with timestamps advancing from 10:10:22.492511 through 10:10:22.514111; remaining repetitions omitted ...]
00:35:53.960 [2024-12-07 10:10:22.514271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-12-07 10:10:22.514286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-12-07 10:10:22.514401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-12-07 10:10:22.514416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-12-07 10:10:22.514577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-12-07 10:10:22.514592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-12-07 10:10:22.514708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-12-07 10:10:22.514722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-12-07 10:10:22.514836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-12-07 10:10:22.514850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 
00:35:53.960 [2024-12-07 10:10:22.515078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-12-07 10:10:22.515094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-12-07 10:10:22.515185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-12-07 10:10:22.515199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-12-07 10:10:22.515371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-12-07 10:10:22.515385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-12-07 10:10:22.515545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-12-07 10:10:22.515559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-12-07 10:10:22.515846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-12-07 10:10:22.515860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 
00:35:53.960 [2024-12-07 10:10:22.516105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-12-07 10:10:22.516119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-12-07 10:10:22.516293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-12-07 10:10:22.516307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-12-07 10:10:22.516521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.516536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.516775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.516790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.516982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.516997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 
00:35:53.961 [2024-12-07 10:10:22.517155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.517170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.517352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.517369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.517533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.517547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.517707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.517721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.517882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.517896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 
00:35:53.961 [2024-12-07 10:10:22.518118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.518133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.518347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.518361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.518467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.518481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.518586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.518600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.518851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.518865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 
00:35:53.961 [2024-12-07 10:10:22.518968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.518983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.519082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.519097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.519204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.519219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.519450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.519465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.519658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.519672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 
00:35:53.961 [2024-12-07 10:10:22.519915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.519929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.520059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.520073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.520303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.520317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.520533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.520549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.520740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.520755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 
00:35:53.961 [2024-12-07 10:10:22.521011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.521026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.521138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.521152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.521258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.521272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.521375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.521390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.521485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.521498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 
00:35:53.961 [2024-12-07 10:10:22.521697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.521711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.521929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.521943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.522049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.522062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.522337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.522351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.522519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.522533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 
00:35:53.961 [2024-12-07 10:10:22.522822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.522836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.523004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.523018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.523210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.523224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.523378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.523392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-12-07 10:10:22.523493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-12-07 10:10:22.523507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 
00:35:53.962 [2024-12-07 10:10:22.523675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.523689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.523905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.523919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.524130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.524145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.524314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.524329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.524478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.524492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 
00:35:53.962 [2024-12-07 10:10:22.524733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.524747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.524938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.524965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.525086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.525100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.525276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.525289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.525511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.525525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 
00:35:53.962 [2024-12-07 10:10:22.525704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.525718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.525938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.525958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.526171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.526186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.526411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.526425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.526539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.526553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 
00:35:53.962 [2024-12-07 10:10:22.526799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.526813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.527081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.527096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.527256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.527270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.527458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.527472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.527589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.527603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 
00:35:53.962 [2024-12-07 10:10:22.527719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.527733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.527926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.527940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.528055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.528069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.528250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.528265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.528361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.528375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 
00:35:53.962 [2024-12-07 10:10:22.528465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.528478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.528775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.528790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.529007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.529021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.529184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.529198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-07 10:10:22.529364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-12-07 10:10:22.529378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 
00:35:53.962 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:53.962 [2024-12-07 10:10:22.529612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.962 [2024-12-07 10:10:22.529628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.962 qpair failed and we were unable to recover it.
00:35:53.962 [2024-12-07 10:10:22.529793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.962 [2024-12-07 10:10:22.529807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.962 qpair failed and we were unable to recover it.
00:35:53.962 [2024-12-07 10:10:22.529921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.962 [2024-12-07 10:10:22.529938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.962 qpair failed and we were unable to recover it.
00:35:53.962 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:53.962 [2024-12-07 10:10:22.530047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.962 [2024-12-07 10:10:22.530063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.962 qpair failed and we were unable to recover it.
00:35:53.962 [2024-12-07 10:10:22.530234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.962 [2024-12-07 10:10:22.530248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.962 qpair failed and we were unable to recover it.
00:35:53.962 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:53.962 [2024-12-07 10:10:22.530393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.962 [2024-12-07 10:10:22.530409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.963 qpair failed and we were unable to recover it.
00:35:53.963 [2024-12-07 10:10:22.530579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.963 [2024-12-07 10:10:22.530593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.963 qpair failed and we were unable to recover it.
00:35:53.963 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:53.963 [2024-12-07 10:10:22.530827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.963 [2024-12-07 10:10:22.530843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.963 qpair failed and we were unable to recover it.
00:35:53.963 [2024-12-07 10:10:22.530941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.530961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.531143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.531158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.531242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.531256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.531409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.531423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.531594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.531609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 
00:35:53.963 [2024-12-07 10:10:22.531826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.531841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.532008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.532026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.532108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.532122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.532284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.532298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.532467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.532481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 
00:35:53.963 [2024-12-07 10:10:22.532746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.532760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.533019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.533034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.533149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.533163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.533334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.533348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.533517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.533530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 
00:35:53.963 [2024-12-07 10:10:22.533785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.533798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.533969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.533983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.534182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.534196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.534413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.534427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.534696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.534711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 
00:35:53.963 [2024-12-07 10:10:22.534798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.534812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.535074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.535088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.535258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.535272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.535464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.535478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.535591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.535605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 
00:35:53.963 [2024-12-07 10:10:22.535846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.535860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.536028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.536042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.536165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.536179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.536278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.536292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.536443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.536457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 
00:35:53.963 [2024-12-07 10:10:22.536625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.536639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.536791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.536805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.537039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.537055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.537253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.537267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-07 10:10:22.537366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-12-07 10:10:22.537380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 
00:35:53.964 [2024-12-07 10:10:22.537499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.537513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.537680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.537695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.537862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.537876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.537988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.538003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.538116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.538131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 
00:35:53.964 [2024-12-07 10:10:22.538237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.538251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.538488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.538504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.538751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.538766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.539013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.539029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.539115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.539130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 
00:35:53.964 [2024-12-07 10:10:22.539284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.539299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.539416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.539433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.539686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.539701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.539852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.539867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.540086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.540102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 
00:35:53.964 [2024-12-07 10:10:22.540220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.540234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.540404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.540418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.540526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.540540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.540731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.540745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.540965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.540980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 
00:35:53.964 [2024-12-07 10:10:22.541160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.541174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.541338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.541352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.541461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.541476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.541644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.541659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.541848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.541862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 
00:35:53.964 [2024-12-07 10:10:22.542105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.542120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.542296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.542311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.542487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.542502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.542766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.542780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.542966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.542981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 
00:35:53.964 [2024-12-07 10:10:22.543172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.543187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.543423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.543438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-07 10:10:22.543615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-12-07 10:10:22.543631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-12-07 10:10:22.543808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.543822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-12-07 10:10:22.544082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.544097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 
00:35:53.965 [2024-12-07 10:10:22.544277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.544292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-12-07 10:10:22.544406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.544420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-12-07 10:10:22.544634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.544649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-12-07 10:10:22.544805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.544820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-12-07 10:10:22.545060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.545076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 
00:35:53.965 [2024-12-07 10:10:22.545317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.545333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-12-07 10:10:22.545513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.545528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-12-07 10:10:22.545707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.545721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-12-07 10:10:22.545956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.545972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-12-07 10:10:22.546181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.546198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 
00:35:53.965 [2024-12-07 10:10:22.546389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.546403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-12-07 10:10:22.546698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.546714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-12-07 10:10:22.546929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.546945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-12-07 10:10:22.547149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.547165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-12-07 10:10:22.547354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.547369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 
00:35:53.965 [2024-12-07 10:10:22.547528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.547543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-12-07 10:10:22.547789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.547808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-12-07 10:10:22.548047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.548063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-12-07 10:10:22.548231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.548246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-12-07 10:10:22.548495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-12-07 10:10:22.548510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 
00:35:53.965 [2024-12-07 10:10:22.548626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.965 [2024-12-07 10:10:22.548640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.965 qpair failed and we were unable to recover it.
00:35:53.965 [2024-12-07 10:10:22.548868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.965 [2024-12-07 10:10:22.548882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.965 qpair failed and we were unable to recover it.
00:35:53.965 [2024-12-07 10:10:22.549042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.965 [2024-12-07 10:10:22.549058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.965 qpair failed and we were unable to recover it.
00:35:53.965 [2024-12-07 10:10:22.549164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.965 [2024-12-07 10:10:22.549178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.965 qpair failed and we were unable to recover it.
00:35:53.965 [2024-12-07 10:10:22.549402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.965 [2024-12-07 10:10:22.549416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.965 qpair failed and we were unable to recover it.
00:35:53.965 [2024-12-07 10:10:22.549609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.965 [2024-12-07 10:10:22.549623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.965 qpair failed and we were unable to recover it.
00:35:53.965 [2024-12-07 10:10:22.549802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.965 [2024-12-07 10:10:22.549817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.965 qpair failed and we were unable to recover it.
00:35:53.965 [2024-12-07 10:10:22.549978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.965 [2024-12-07 10:10:22.549992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.965 qpair failed and we were unable to recover it.
00:35:53.965 [2024-12-07 10:10:22.550239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.965 [2024-12-07 10:10:22.550254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.965 qpair failed and we were unable to recover it.
00:35:53.965 [2024-12-07 10:10:22.550404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.965 [2024-12-07 10:10:22.550417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.965 qpair failed and we were unable to recover it.
00:35:53.965 [2024-12-07 10:10:22.550665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.965 [2024-12-07 10:10:22.550679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.965 qpair failed and we were unable to recover it.
00:35:53.965 Malloc0
00:35:53.965 [2024-12-07 10:10:22.550860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.965 [2024-12-07 10:10:22.550875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.965 qpair failed and we were unable to recover it.
00:35:53.965 [2024-12-07 10:10:22.551027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.965 [2024-12-07 10:10:22.551042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.965 qpair failed and we were unable to recover it.
00:35:53.965 [2024-12-07 10:10:22.551267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.965 [2024-12-07 10:10:22.551281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.965 qpair failed and we were unable to recover it.
00:35:53.965 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:53.965 [2024-12-07 10:10:22.551519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.551534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.551792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.551806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:35:53.966 [2024-12-07 10:10:22.552044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.552059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.552174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.552188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:53.966 [2024-12-07 10:10:22.552410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.552424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:53.966 [2024-12-07 10:10:22.552705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.552720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.552888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.552902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.553180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.553198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.553393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.553408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.553590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.553604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.553826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.553840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.554054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.554069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.554257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.554271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.554403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:53.966 [2024-12-07 10:10:22.554511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.554526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.554774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.554788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.555050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.555064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.555280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.555295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.555405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.555419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.555599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.555613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.555764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.555778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.555944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.555967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.556139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.556154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.556302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.556317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.556487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.556501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.556617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.556632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.556781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.556796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.556959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.556974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.557167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.557181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.557345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.557360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.557619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.557634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.557800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.557815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.557989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.558007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.558180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.558195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.558390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.558406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.558644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.558658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.966 qpair failed and we were unable to recover it.
00:35:53.966 [2024-12-07 10:10:22.558876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.966 [2024-12-07 10:10:22.558890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.559073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.559088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.559324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.559339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.559579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.559595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.559813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.559828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.559997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.560013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.560252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.560267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.560450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.560464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.560776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.560791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.560996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.561011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.561208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.561224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.561438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.561454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.561720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.561735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.561895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.561910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.562153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.562169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.562331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.562345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.562451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.562465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.562703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.562718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.562881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.562898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.563069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.563084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:53.967 [2024-12-07 10:10:22.563272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.563288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.563399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.563414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:35:53.967 [2024-12-07 10:10:22.563683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.563697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:53.967 [2024-12-07 10:10:22.563893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.563909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.564019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.564037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:53.967 [2024-12-07 10:10:22.564256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.564272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.564493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.564508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.564680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.564694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.564804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.564818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.565032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.565047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.565314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.565329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.565572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.565586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.565789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.565804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.565990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.566005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.566099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.566114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.967 qpair failed and we were unable to recover it.
00:35:53.967 [2024-12-07 10:10:22.566338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.967 [2024-12-07 10:10:22.566354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-12-07 10:10:22.566563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-12-07 10:10:22.566577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-12-07 10:10:22.566731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-12-07 10:10:22.566745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-12-07 10:10:22.566937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-12-07 10:10:22.566955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-12-07 10:10:22.567200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-12-07 10:10:22.567215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-12-07 10:10:22.567380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-12-07 10:10:22.567395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-12-07 10:10:22.567557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-12-07 10:10:22.567572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-12-07 10:10:22.567837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-12-07 10:10:22.567851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-12-07 10:10:22.568018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-12-07 10:10:22.568033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-12-07 10:10:22.568191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.568205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.568472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.568486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.568705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.568721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.568985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.569001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.569151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.569166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 
00:35:53.968 [2024-12-07 10:10:22.569408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.569422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.569642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.569657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.569835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.569850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.570027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.570043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.570292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.570307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 
00:35:53.968 [2024-12-07 10:10:22.570547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.570562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.570724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.570740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.570962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.570978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.968 [2024-12-07 10:10:22.571149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.571165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.571330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.571344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 
00:35:53.968 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:53.968 [2024-12-07 10:10:22.571520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.571536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.571688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.571703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.968 [2024-12-07 10:10:22.571878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.571892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.572007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.572022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 
00:35:53.968 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:53.968 [2024-12-07 10:10:22.572265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.572280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.572372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.572386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.572610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.572627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.572809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.572825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.573043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.573057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 
00:35:53.968 [2024-12-07 10:10:22.573294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.573309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.573469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.573484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.573603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.573617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-12-07 10:10:22.573789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-12-07 10:10:22.573804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.573959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.573974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 
00:35:53.969 [2024-12-07 10:10:22.574130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.574145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.574413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.574429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.574718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.574737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.574894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.574908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.575080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.575095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 
00:35:53.969 [2024-12-07 10:10:22.575269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.575284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.575402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.575417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.575635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.575649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.575876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.575892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.576063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.576081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 
00:35:53.969 [2024-12-07 10:10:22.576199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.576213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.576402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.576417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.576637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.576652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.576921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.576937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.577112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.577126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 
00:35:53.969 [2024-12-07 10:10:22.577355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.577373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.577631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.577648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.577899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.577915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.578064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.578080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.578297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.578313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 
00:35:53.969 [2024-12-07 10:10:22.578534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.578549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.578650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.578664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.578901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.578915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.579004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.579019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.969 [2024-12-07 10:10:22.579187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.579202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 
00:35:53.969 [2024-12-07 10:10:22.579330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.579344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:53.969 [2024-12-07 10:10:22.579583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.579600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.579701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.579715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-12-07 10:10:22.579822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.579839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 
00:35:53.969 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.969 [2024-12-07 10:10:22.580000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.580016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:53.969 [2024-12-07 10:10:22.580231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-12-07 10:10:22.580247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.970 [2024-12-07 10:10:22.580442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-12-07 10:10:22.580455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-12-07 10:10:22.580695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-12-07 10:10:22.580710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 
00:35:53.970 [2024-12-07 10:10:22.580872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-12-07 10:10:22.580887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-12-07 10:10:22.581042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-12-07 10:10:22.581061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-12-07 10:10:22.581232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-12-07 10:10:22.581246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-12-07 10:10:22.581465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-12-07 10:10:22.581480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-12-07 10:10:22.581649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-12-07 10:10:22.581664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 
00:35:53.970 [2024-12-07 10:10:22.581898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-12-07 10:10:22.581913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-12-07 10:10:22.582103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-12-07 10:10:22.582119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efc04000b90 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-12-07 10:10:22.582375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-12-07 10:10:22.582401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-12-07 10:10:22.582565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-12-07 10:10:22.582582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efbf8000b90 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 
00:35:53.970 [2024-12-07 10:10:22.582663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:53.970 [2024-12-07 10:10:22.585089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.970 [2024-12-07 10:10:22.585198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.970 [2024-12-07 10:10:22.585221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.970 [2024-12-07 10:10:22.585233] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.970 [2024-12-07 10:10:22.585242] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:53.970 [2024-12-07 10:10:22.585268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:53.970 qpair failed and we were unable to recover it. 
00:35:53.970 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.970 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:53.970 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.970 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:53.970 [2024-12-07 10:10:22.595005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.970 [2024-12-07 10:10:22.595084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.970 [2024-12-07 10:10:22.595103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.970 [2024-12-07 10:10:22.595114] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.970 [2024-12-07 10:10:22.595123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:53.970 [2024-12-07 10:10:22.595147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:53.970 qpair failed and we were unable to recover it. 
00:35:53.970 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.970 10:10:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1472060 00:35:53.970 [2024-12-07 10:10:22.605013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.970 [2024-12-07 10:10:22.605105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.970 [2024-12-07 10:10:22.605121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.970 [2024-12-07 10:10:22.605129] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.970 [2024-12-07 10:10:22.605136] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:53.970 [2024-12-07 10:10:22.605158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:53.970 qpair failed and we were unable to recover it. 
00:35:53.970 [2024-12-07 10:10:22.615037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.970 [2024-12-07 10:10:22.615106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.970 [2024-12-07 10:10:22.615127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.970 [2024-12-07 10:10:22.615135] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.970 [2024-12-07 10:10:22.615142] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:53.970 [2024-12-07 10:10:22.615158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:53.970 qpair failed and we were unable to recover it. 
00:35:53.970 [2024-12-07 10:10:22.624963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.970 [2024-12-07 10:10:22.625023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.970 [2024-12-07 10:10:22.625038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.970 [2024-12-07 10:10:22.625048] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.970 [2024-12-07 10:10:22.625056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:53.970 [2024-12-07 10:10:22.625072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-12-07 10:10:22.634903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.970 [2024-12-07 10:10:22.634967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.970 [2024-12-07 10:10:22.634981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.970 [2024-12-07 10:10:22.634989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.970 [2024-12-07 10:10:22.634997] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:53.970 [2024-12-07 10:10:22.635012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:53.970 qpair failed and we were unable to recover it.
00:35:54.229 [2024-12-07 10:10:22.644963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.229 [2024-12-07 10:10:22.645025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.229 [2024-12-07 10:10:22.645045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.229 [2024-12-07 10:10:22.645052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.229 [2024-12-07 10:10:22.645059] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.229 [2024-12-07 10:10:22.645077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.229 qpair failed and we were unable to recover it.
00:35:54.229 [2024-12-07 10:10:22.655117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.229 [2024-12-07 10:10:22.655219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.229 [2024-12-07 10:10:22.655237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.229 [2024-12-07 10:10:22.655245] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.229 [2024-12-07 10:10:22.655251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.230 [2024-12-07 10:10:22.655267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.230 qpair failed and we were unable to recover it.
00:35:54.230 [2024-12-07 10:10:22.665098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.230 [2024-12-07 10:10:22.665168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.230 [2024-12-07 10:10:22.665182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.230 [2024-12-07 10:10:22.665189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.230 [2024-12-07 10:10:22.665195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.230 [2024-12-07 10:10:22.665210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.230 qpair failed and we were unable to recover it.
00:35:54.230 [2024-12-07 10:10:22.675133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.230 [2024-12-07 10:10:22.675191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.230 [2024-12-07 10:10:22.675213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.230 [2024-12-07 10:10:22.675220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.230 [2024-12-07 10:10:22.675226] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.230 [2024-12-07 10:10:22.675241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.230 qpair failed and we were unable to recover it.
00:35:54.230 [2024-12-07 10:10:22.685141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.230 [2024-12-07 10:10:22.685199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.230 [2024-12-07 10:10:22.685221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.230 [2024-12-07 10:10:22.685228] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.230 [2024-12-07 10:10:22.685234] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.230 [2024-12-07 10:10:22.685249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.230 qpair failed and we were unable to recover it.
00:35:54.230 [2024-12-07 10:10:22.695191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.230 [2024-12-07 10:10:22.695298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.230 [2024-12-07 10:10:22.695315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.230 [2024-12-07 10:10:22.695322] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.230 [2024-12-07 10:10:22.695331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.230 [2024-12-07 10:10:22.695348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.230 qpair failed and we were unable to recover it.
00:35:54.230 [2024-12-07 10:10:22.705176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.230 [2024-12-07 10:10:22.705240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.230 [2024-12-07 10:10:22.705262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.230 [2024-12-07 10:10:22.705269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.230 [2024-12-07 10:10:22.705275] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.230 [2024-12-07 10:10:22.705292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.230 qpair failed and we were unable to recover it.
00:35:54.230 [2024-12-07 10:10:22.715204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.230 [2024-12-07 10:10:22.715265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.230 [2024-12-07 10:10:22.715285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.230 [2024-12-07 10:10:22.715291] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.230 [2024-12-07 10:10:22.715298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.230 [2024-12-07 10:10:22.715313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.230 qpair failed and we were unable to recover it.
00:35:54.230 [2024-12-07 10:10:22.725232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.230 [2024-12-07 10:10:22.725292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.230 [2024-12-07 10:10:22.725312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.230 [2024-12-07 10:10:22.725318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.230 [2024-12-07 10:10:22.725324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.230 [2024-12-07 10:10:22.725339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.230 qpair failed and we were unable to recover it.
00:35:54.230 [2024-12-07 10:10:22.735268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.230 [2024-12-07 10:10:22.735332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.230 [2024-12-07 10:10:22.735353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.230 [2024-12-07 10:10:22.735359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.230 [2024-12-07 10:10:22.735365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.230 [2024-12-07 10:10:22.735381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.230 qpair failed and we were unable to recover it.
00:35:54.230 [2024-12-07 10:10:22.745270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.230 [2024-12-07 10:10:22.745332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.230 [2024-12-07 10:10:22.745345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.230 [2024-12-07 10:10:22.745355] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.230 [2024-12-07 10:10:22.745361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.230 [2024-12-07 10:10:22.745376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.230 qpair failed and we were unable to recover it.
00:35:54.230 [2024-12-07 10:10:22.755289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.230 [2024-12-07 10:10:22.755349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.230 [2024-12-07 10:10:22.755365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.230 [2024-12-07 10:10:22.755371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.230 [2024-12-07 10:10:22.755377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.230 [2024-12-07 10:10:22.755392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.230 qpair failed and we were unable to recover it.
00:35:54.230 [2024-12-07 10:10:22.765325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.230 [2024-12-07 10:10:22.765386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.230 [2024-12-07 10:10:22.765406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.230 [2024-12-07 10:10:22.765413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.230 [2024-12-07 10:10:22.765419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.230 [2024-12-07 10:10:22.765434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.230 qpair failed and we were unable to recover it.
00:35:54.230 [2024-12-07 10:10:22.775375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.230 [2024-12-07 10:10:22.775438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.230 [2024-12-07 10:10:22.775459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.230 [2024-12-07 10:10:22.775466] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.230 [2024-12-07 10:10:22.775472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.230 [2024-12-07 10:10:22.775487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.230 qpair failed and we were unable to recover it.
00:35:54.230 [2024-12-07 10:10:22.785389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.230 [2024-12-07 10:10:22.785447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.230 [2024-12-07 10:10:22.785461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.230 [2024-12-07 10:10:22.785471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.231 [2024-12-07 10:10:22.785481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.231 [2024-12-07 10:10:22.785496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.231 qpair failed and we were unable to recover it.
00:35:54.231 [2024-12-07 10:10:22.795413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.231 [2024-12-07 10:10:22.795474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.231 [2024-12-07 10:10:22.795493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.231 [2024-12-07 10:10:22.795499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.231 [2024-12-07 10:10:22.795505] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.231 [2024-12-07 10:10:22.795521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.231 qpair failed and we were unable to recover it.
00:35:54.231 [2024-12-07 10:10:22.805438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.231 [2024-12-07 10:10:22.805496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.231 [2024-12-07 10:10:22.805517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.231 [2024-12-07 10:10:22.805524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.231 [2024-12-07 10:10:22.805530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.231 [2024-12-07 10:10:22.805545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.231 qpair failed and we were unable to recover it.
00:35:54.231 [2024-12-07 10:10:22.815488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.231 [2024-12-07 10:10:22.815551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.231 [2024-12-07 10:10:22.815565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.231 [2024-12-07 10:10:22.815576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.231 [2024-12-07 10:10:22.815582] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.231 [2024-12-07 10:10:22.815597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.231 qpair failed and we were unable to recover it.
00:35:54.231 [2024-12-07 10:10:22.825508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.231 [2024-12-07 10:10:22.825565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.231 [2024-12-07 10:10:22.825579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.231 [2024-12-07 10:10:22.825591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.231 [2024-12-07 10:10:22.825597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.231 [2024-12-07 10:10:22.825611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.231 qpair failed and we were unable to recover it.
00:35:54.231 [2024-12-07 10:10:22.835529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.231 [2024-12-07 10:10:22.835594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.231 [2024-12-07 10:10:22.835613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.231 [2024-12-07 10:10:22.835620] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.231 [2024-12-07 10:10:22.835625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.231 [2024-12-07 10:10:22.835640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.231 qpair failed and we were unable to recover it.
00:35:54.231 [2024-12-07 10:10:22.845501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.231 [2024-12-07 10:10:22.845561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.231 [2024-12-07 10:10:22.845575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.231 [2024-12-07 10:10:22.845585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.231 [2024-12-07 10:10:22.845591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.231 [2024-12-07 10:10:22.845606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.231 qpair failed and we were unable to recover it.
00:35:54.231 [2024-12-07 10:10:22.855541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.231 [2024-12-07 10:10:22.855620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.231 [2024-12-07 10:10:22.855636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.231 [2024-12-07 10:10:22.855643] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.231 [2024-12-07 10:10:22.855649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.231 [2024-12-07 10:10:22.855664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.231 qpair failed and we were unable to recover it.
00:35:54.231 [2024-12-07 10:10:22.865629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.231 [2024-12-07 10:10:22.865689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.231 [2024-12-07 10:10:22.865711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.231 [2024-12-07 10:10:22.865718] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.231 [2024-12-07 10:10:22.865724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.231 [2024-12-07 10:10:22.865738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.231 qpair failed and we were unable to recover it.
00:35:54.231 [2024-12-07 10:10:22.875679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.231 [2024-12-07 10:10:22.875737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.231 [2024-12-07 10:10:22.875760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.231 [2024-12-07 10:10:22.875769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.231 [2024-12-07 10:10:22.875775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.231 [2024-12-07 10:10:22.875790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.231 qpair failed and we were unable to recover it.
00:35:54.231 [2024-12-07 10:10:22.885689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.231 [2024-12-07 10:10:22.885748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.231 [2024-12-07 10:10:22.885770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.231 [2024-12-07 10:10:22.885776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.231 [2024-12-07 10:10:22.885782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.231 [2024-12-07 10:10:22.885797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.231 qpair failed and we were unable to recover it.
00:35:54.231 [2024-12-07 10:10:22.895711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.231 [2024-12-07 10:10:22.895772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.231 [2024-12-07 10:10:22.895795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.231 [2024-12-07 10:10:22.895802] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.231 [2024-12-07 10:10:22.895808] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.231 [2024-12-07 10:10:22.895823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.231 qpair failed and we were unable to recover it.
00:35:54.231 [2024-12-07 10:10:22.905722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.231 [2024-12-07 10:10:22.905784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.231 [2024-12-07 10:10:22.905804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.231 [2024-12-07 10:10:22.905811] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.231 [2024-12-07 10:10:22.905817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.231 [2024-12-07 10:10:22.905832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.231 qpair failed and we were unable to recover it.
00:35:54.231 [2024-12-07 10:10:22.915734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.231 [2024-12-07 10:10:22.915793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.231 [2024-12-07 10:10:22.915810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.231 [2024-12-07 10:10:22.915816] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.231 [2024-12-07 10:10:22.915822] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.232 [2024-12-07 10:10:22.915837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.232 qpair failed and we were unable to recover it.
00:35:54.232 [2024-12-07 10:10:22.925786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.232 [2024-12-07 10:10:22.925842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.232 [2024-12-07 10:10:22.925865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.232 [2024-12-07 10:10:22.925872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.232 [2024-12-07 10:10:22.925878] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.232 [2024-12-07 10:10:22.925893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.232 qpair failed and we were unable to recover it.
00:35:54.232 [2024-12-07 10:10:22.935874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.232 [2024-12-07 10:10:22.935940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.232 [2024-12-07 10:10:22.935959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.232 [2024-12-07 10:10:22.935966] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.232 [2024-12-07 10:10:22.935972] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.232 [2024-12-07 10:10:22.935987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.232 qpair failed and we were unable to recover it.
00:35:54.232 [2024-12-07 10:10:22.945838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.232 [2024-12-07 10:10:22.945893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.232 [2024-12-07 10:10:22.945906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.232 [2024-12-07 10:10:22.945921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.232 [2024-12-07 10:10:22.945927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.232 [2024-12-07 10:10:22.945942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.232 qpair failed and we were unable to recover it.
00:35:54.491 [2024-12-07 10:10:22.955858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.491 [2024-12-07 10:10:22.955916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.491 [2024-12-07 10:10:22.955939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.491 [2024-12-07 10:10:22.955950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.491 [2024-12-07 10:10:22.955957] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.491 [2024-12-07 10:10:22.955974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.491 qpair failed and we were unable to recover it.
00:35:54.491 [2024-12-07 10:10:22.965915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.491 [2024-12-07 10:10:22.965982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.491 [2024-12-07 10:10:22.965999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.491 [2024-12-07 10:10:22.966010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.491 [2024-12-07 10:10:22.966017] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.491 [2024-12-07 10:10:22.966034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.491 qpair failed and we were unable to recover it.
00:35:54.491 [2024-12-07 10:10:22.975932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:54.491 [2024-12-07 10:10:22.976002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:54.491 [2024-12-07 10:10:22.976017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:54.491 [2024-12-07 10:10:22.976025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:54.491 [2024-12-07 10:10:22.976031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:54.491 [2024-12-07 10:10:22.976047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:54.491 qpair failed and we were unable to recover it.
00:35:54.491 [2024-12-07 10:10:22.985951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.491 [2024-12-07 10:10:22.986011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.491 [2024-12-07 10:10:22.986034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.491 [2024-12-07 10:10:22.986041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.491 [2024-12-07 10:10:22.986047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.491 [2024-12-07 10:10:22.986062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.491 qpair failed and we were unable to recover it. 
00:35:54.491 [2024-12-07 10:10:22.995939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.491 [2024-12-07 10:10:22.996003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.491 [2024-12-07 10:10:22.996016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.491 [2024-12-07 10:10:22.996029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.491 [2024-12-07 10:10:22.996035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.491 [2024-12-07 10:10:22.996051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.491 qpair failed and we were unable to recover it. 
00:35:54.491 [2024-12-07 10:10:23.006019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.491 [2024-12-07 10:10:23.006081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.491 [2024-12-07 10:10:23.006100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.491 [2024-12-07 10:10:23.006108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.491 [2024-12-07 10:10:23.006114] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.491 [2024-12-07 10:10:23.006129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.491 qpair failed and we were unable to recover it. 
00:35:54.491 [2024-12-07 10:10:23.016037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.491 [2024-12-07 10:10:23.016105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.491 [2024-12-07 10:10:23.016123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.491 [2024-12-07 10:10:23.016129] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.491 [2024-12-07 10:10:23.016135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.491 [2024-12-07 10:10:23.016151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.491 qpair failed and we were unable to recover it. 
00:35:54.491 [2024-12-07 10:10:23.026058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.491 [2024-12-07 10:10:23.026121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.491 [2024-12-07 10:10:23.026140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.491 [2024-12-07 10:10:23.026147] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.491 [2024-12-07 10:10:23.026153] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.491 [2024-12-07 10:10:23.026167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.491 qpair failed and we were unable to recover it. 
00:35:54.491 [2024-12-07 10:10:23.036076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.491 [2024-12-07 10:10:23.036138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.491 [2024-12-07 10:10:23.036151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.491 [2024-12-07 10:10:23.036158] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.491 [2024-12-07 10:10:23.036164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.491 [2024-12-07 10:10:23.036179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.491 qpair failed and we were unable to recover it. 
00:35:54.491 [2024-12-07 10:10:23.046116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.491 [2024-12-07 10:10:23.046177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.491 [2024-12-07 10:10:23.046198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.491 [2024-12-07 10:10:23.046205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.491 [2024-12-07 10:10:23.046211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.491 [2024-12-07 10:10:23.046225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.491 qpair failed and we were unable to recover it. 
00:35:54.491 [2024-12-07 10:10:23.056130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.491 [2024-12-07 10:10:23.056189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.491 [2024-12-07 10:10:23.056214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.491 [2024-12-07 10:10:23.056220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.491 [2024-12-07 10:10:23.056226] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.492 [2024-12-07 10:10:23.056241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.492 qpair failed and we were unable to recover it. 
00:35:54.492 [2024-12-07 10:10:23.066162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.492 [2024-12-07 10:10:23.066221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.492 [2024-12-07 10:10:23.066236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.492 [2024-12-07 10:10:23.066249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.492 [2024-12-07 10:10:23.066255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.492 [2024-12-07 10:10:23.066269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.492 qpair failed and we were unable to recover it. 
00:35:54.492 [2024-12-07 10:10:23.076187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.492 [2024-12-07 10:10:23.076249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.492 [2024-12-07 10:10:23.076268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.492 [2024-12-07 10:10:23.076276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.492 [2024-12-07 10:10:23.076284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.492 [2024-12-07 10:10:23.076299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.492 qpair failed and we were unable to recover it. 
00:35:54.492 [2024-12-07 10:10:23.086218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.492 [2024-12-07 10:10:23.086311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.492 [2024-12-07 10:10:23.086325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.492 [2024-12-07 10:10:23.086332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.492 [2024-12-07 10:10:23.086338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.492 [2024-12-07 10:10:23.086354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.492 qpair failed and we were unable to recover it. 
00:35:54.492 [2024-12-07 10:10:23.096200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.492 [2024-12-07 10:10:23.096265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.492 [2024-12-07 10:10:23.096285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.492 [2024-12-07 10:10:23.096293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.492 [2024-12-07 10:10:23.096299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.492 [2024-12-07 10:10:23.096318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.492 qpair failed and we were unable to recover it. 
00:35:54.492 [2024-12-07 10:10:23.106278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.492 [2024-12-07 10:10:23.106338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.492 [2024-12-07 10:10:23.106353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.492 [2024-12-07 10:10:23.106362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.492 [2024-12-07 10:10:23.106369] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.492 [2024-12-07 10:10:23.106384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.492 qpair failed and we were unable to recover it. 
00:35:54.492 [2024-12-07 10:10:23.116255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.492 [2024-12-07 10:10:23.116317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.492 [2024-12-07 10:10:23.116331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.492 [2024-12-07 10:10:23.116338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.492 [2024-12-07 10:10:23.116345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.492 [2024-12-07 10:10:23.116360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.492 qpair failed and we were unable to recover it. 
00:35:54.492 [2024-12-07 10:10:23.126330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.492 [2024-12-07 10:10:23.126382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.492 [2024-12-07 10:10:23.126396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.492 [2024-12-07 10:10:23.126410] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.492 [2024-12-07 10:10:23.126416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.492 [2024-12-07 10:10:23.126431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.492 qpair failed and we were unable to recover it. 
00:35:54.492 [2024-12-07 10:10:23.136297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.492 [2024-12-07 10:10:23.136357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.492 [2024-12-07 10:10:23.136381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.492 [2024-12-07 10:10:23.136388] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.492 [2024-12-07 10:10:23.136394] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.492 [2024-12-07 10:10:23.136408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.492 qpair failed and we were unable to recover it. 
00:35:54.492 [2024-12-07 10:10:23.146324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.492 [2024-12-07 10:10:23.146386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.492 [2024-12-07 10:10:23.146407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.492 [2024-12-07 10:10:23.146414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.492 [2024-12-07 10:10:23.146420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.492 [2024-12-07 10:10:23.146435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.492 qpair failed and we were unable to recover it. 
00:35:54.492 [2024-12-07 10:10:23.156384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.492 [2024-12-07 10:10:23.156451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.492 [2024-12-07 10:10:23.156464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.492 [2024-12-07 10:10:23.156471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.492 [2024-12-07 10:10:23.156477] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.492 [2024-12-07 10:10:23.156492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.492 qpair failed and we were unable to recover it. 
00:35:54.492 [2024-12-07 10:10:23.166395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.492 [2024-12-07 10:10:23.166452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.492 [2024-12-07 10:10:23.166465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.492 [2024-12-07 10:10:23.166474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.492 [2024-12-07 10:10:23.166480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.492 [2024-12-07 10:10:23.166495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.492 qpair failed and we were unable to recover it. 
00:35:54.492 [2024-12-07 10:10:23.176504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.492 [2024-12-07 10:10:23.176574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.492 [2024-12-07 10:10:23.176588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.492 [2024-12-07 10:10:23.176595] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.492 [2024-12-07 10:10:23.176601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.492 [2024-12-07 10:10:23.176615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.492 qpair failed and we were unable to recover it. 
00:35:54.492 [2024-12-07 10:10:23.186444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.492 [2024-12-07 10:10:23.186506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.492 [2024-12-07 10:10:23.186527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.492 [2024-12-07 10:10:23.186534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.492 [2024-12-07 10:10:23.186540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.492 [2024-12-07 10:10:23.186559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.492 qpair failed and we were unable to recover it. 
00:35:54.493 [2024-12-07 10:10:23.196477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.493 [2024-12-07 10:10:23.196535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.493 [2024-12-07 10:10:23.196550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.493 [2024-12-07 10:10:23.196560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.493 [2024-12-07 10:10:23.196566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.493 [2024-12-07 10:10:23.196581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.493 qpair failed and we were unable to recover it. 
00:35:54.493 [2024-12-07 10:10:23.206574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.493 [2024-12-07 10:10:23.206633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.493 [2024-12-07 10:10:23.206655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.493 [2024-12-07 10:10:23.206662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.493 [2024-12-07 10:10:23.206668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.493 [2024-12-07 10:10:23.206683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.493 qpair failed and we were unable to recover it. 
00:35:54.752 [2024-12-07 10:10:23.216538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.752 [2024-12-07 10:10:23.216598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.752 [2024-12-07 10:10:23.216624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.752 [2024-12-07 10:10:23.216631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.752 [2024-12-07 10:10:23.216637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.752 [2024-12-07 10:10:23.216654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.752 qpair failed and we were unable to recover it. 
00:35:54.752 [2024-12-07 10:10:23.226617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.752 [2024-12-07 10:10:23.226683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.752 [2024-12-07 10:10:23.226700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.752 [2024-12-07 10:10:23.226707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.752 [2024-12-07 10:10:23.226714] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.752 [2024-12-07 10:10:23.226731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.752 qpair failed and we were unable to recover it. 
00:35:54.752 [2024-12-07 10:10:23.236592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.752 [2024-12-07 10:10:23.236648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.752 [2024-12-07 10:10:23.236672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.752 [2024-12-07 10:10:23.236679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.752 [2024-12-07 10:10:23.236685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.752 [2024-12-07 10:10:23.236700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.752 qpair failed and we were unable to recover it. 
00:35:54.752 [2024-12-07 10:10:23.246634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.752 [2024-12-07 10:10:23.246697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.752 [2024-12-07 10:10:23.246711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.752 [2024-12-07 10:10:23.246725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.752 [2024-12-07 10:10:23.246731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.752 [2024-12-07 10:10:23.246746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.752 qpair failed and we were unable to recover it. 
00:35:54.752 [2024-12-07 10:10:23.256651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.752 [2024-12-07 10:10:23.256712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.752 [2024-12-07 10:10:23.256727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.752 [2024-12-07 10:10:23.256736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.752 [2024-12-07 10:10:23.256742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.752 [2024-12-07 10:10:23.256757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.752 qpair failed and we were unable to recover it. 
00:35:54.752 [2024-12-07 10:10:23.266688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.752 [2024-12-07 10:10:23.266749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.752 [2024-12-07 10:10:23.266770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-07 10:10:23.266777] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-07 10:10:23.266783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.753 [2024-12-07 10:10:23.266798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.753 qpair failed and we were unable to recover it. 
00:35:54.753 [2024-12-07 10:10:23.276788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-07 10:10:23.276857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-07 10:10:23.276871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-07 10:10:23.276877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-07 10:10:23.276886] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.753 [2024-12-07 10:10:23.276902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.753 qpair failed and we were unable to recover it. 
00:35:54.753 [2024-12-07 10:10:23.286740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-07 10:10:23.286800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-07 10:10:23.286815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-07 10:10:23.286821] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-07 10:10:23.286827] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.753 [2024-12-07 10:10:23.286842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.753 qpair failed and we were unable to recover it. 
00:35:54.753 [2024-12-07 10:10:23.296838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-07 10:10:23.296898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-07 10:10:23.296921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-07 10:10:23.296927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-07 10:10:23.296933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.753 [2024-12-07 10:10:23.296953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.753 qpair failed and we were unable to recover it. 
00:35:54.753 [2024-12-07 10:10:23.306861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-07 10:10:23.306918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-07 10:10:23.306932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-07 10:10:23.306945] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-07 10:10:23.306956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.753 [2024-12-07 10:10:23.306971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.753 qpair failed and we were unable to recover it. 
00:35:54.753 [2024-12-07 10:10:23.316884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-07 10:10:23.316944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-07 10:10:23.316963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-07 10:10:23.316969] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-07 10:10:23.316975] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.753 [2024-12-07 10:10:23.316991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.753 qpair failed and we were unable to recover it. 
00:35:54.753 [2024-12-07 10:10:23.326919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-07 10:10:23.326984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-07 10:10:23.327003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-07 10:10:23.327010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-07 10:10:23.327016] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.753 [2024-12-07 10:10:23.327031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.753 qpair failed and we were unable to recover it. 
00:35:54.753 [2024-12-07 10:10:23.336945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-07 10:10:23.337013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-07 10:10:23.337034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-07 10:10:23.337041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-07 10:10:23.337047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.753 [2024-12-07 10:10:23.337061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.753 qpair failed and we were unable to recover it. 
00:35:54.753 [2024-12-07 10:10:23.346928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-07 10:10:23.346995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-07 10:10:23.347011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-07 10:10:23.347017] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-07 10:10:23.347023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.753 [2024-12-07 10:10:23.347039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.753 qpair failed and we were unable to recover it. 
00:35:54.753 [2024-12-07 10:10:23.357000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-07 10:10:23.357059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-07 10:10:23.357081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-07 10:10:23.357087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-07 10:10:23.357093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.753 [2024-12-07 10:10:23.357108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.753 qpair failed and we were unable to recover it. 
00:35:54.753 [2024-12-07 10:10:23.367026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-07 10:10:23.367103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-07 10:10:23.367116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-07 10:10:23.367126] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-07 10:10:23.367133] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.753 [2024-12-07 10:10:23.367147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.753 qpair failed and we were unable to recover it. 
00:35:54.753 [2024-12-07 10:10:23.377127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-07 10:10:23.377233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-07 10:10:23.377247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-07 10:10:23.377253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-07 10:10:23.377259] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.753 [2024-12-07 10:10:23.377274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.753 qpair failed and we were unable to recover it. 
00:35:54.753 [2024-12-07 10:10:23.387026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-07 10:10:23.387127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-07 10:10:23.387141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-07 10:10:23.387147] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-07 10:10:23.387154] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.753 [2024-12-07 10:10:23.387168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.753 qpair failed and we were unable to recover it. 
00:35:54.753 [2024-12-07 10:10:23.397154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-07 10:10:23.397223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-07 10:10:23.397242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-07 10:10:23.397248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-07 10:10:23.397254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.754 [2024-12-07 10:10:23.397269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.754 qpair failed and we were unable to recover it. 
00:35:54.754 [2024-12-07 10:10:23.407158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-07 10:10:23.407217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-07 10:10:23.407239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-07 10:10:23.407245] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-07 10:10:23.407251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.754 [2024-12-07 10:10:23.407265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.754 qpair failed and we were unable to recover it. 
00:35:54.754 [2024-12-07 10:10:23.417201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-07 10:10:23.417274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-07 10:10:23.417288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-07 10:10:23.417294] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-07 10:10:23.417300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.754 [2024-12-07 10:10:23.417315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.754 qpair failed and we were unable to recover it. 
00:35:54.754 [2024-12-07 10:10:23.427264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-07 10:10:23.427322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-07 10:10:23.427336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-07 10:10:23.427348] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-07 10:10:23.427354] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.754 [2024-12-07 10:10:23.427368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.754 qpair failed and we were unable to recover it. 
00:35:54.754 [2024-12-07 10:10:23.437194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-07 10:10:23.437249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-07 10:10:23.437263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-07 10:10:23.437273] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-07 10:10:23.437279] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.754 [2024-12-07 10:10:23.437294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.754 qpair failed and we were unable to recover it. 
00:35:54.754 [2024-12-07 10:10:23.447244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-07 10:10:23.447346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-07 10:10:23.447359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-07 10:10:23.447366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-07 10:10:23.447372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.754 [2024-12-07 10:10:23.447387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.754 qpair failed and we were unable to recover it. 
00:35:54.754 [2024-12-07 10:10:23.457258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-07 10:10:23.457322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-07 10:10:23.457341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-07 10:10:23.457350] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-07 10:10:23.457356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.754 [2024-12-07 10:10:23.457371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.754 qpair failed and we were unable to recover it. 
00:35:54.754 [2024-12-07 10:10:23.467312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-07 10:10:23.467376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-07 10:10:23.467396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-07 10:10:23.467403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-07 10:10:23.467409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:54.754 [2024-12-07 10:10:23.467424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.754 qpair failed and we were unable to recover it. 
00:35:55.012 [2024-12-07 10:10:23.477310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.012 [2024-12-07 10:10:23.477369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.012 [2024-12-07 10:10:23.477393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.012 [2024-12-07 10:10:23.477400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.012 [2024-12-07 10:10:23.477406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:55.012 [2024-12-07 10:10:23.477423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:55.012 qpair failed and we were unable to recover it. 
00:35:55.012 [2024-12-07 10:10:23.487383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.012 [2024-12-07 10:10:23.487442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.012 [2024-12-07 10:10:23.487464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.012 [2024-12-07 10:10:23.487471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.012 [2024-12-07 10:10:23.487477] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:55.012 [2024-12-07 10:10:23.487494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:55.012 qpair failed and we were unable to recover it. 
00:35:55.012 [2024-12-07 10:10:23.497455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.012 [2024-12-07 10:10:23.497519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.012 [2024-12-07 10:10:23.497539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.012 [2024-12-07 10:10:23.497546] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.012 [2024-12-07 10:10:23.497552] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:55.012 [2024-12-07 10:10:23.497567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:55.012 qpair failed and we were unable to recover it. 
00:35:55.012 [2024-12-07 10:10:23.507386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.012 [2024-12-07 10:10:23.507448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.012 [2024-12-07 10:10:23.507469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.012 [2024-12-07 10:10:23.507475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.012 [2024-12-07 10:10:23.507482] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:55.012 [2024-12-07 10:10:23.507497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:55.012 qpair failed and we were unable to recover it. 
00:35:55.012 [2024-12-07 10:10:23.517462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.012 [2024-12-07 10:10:23.517522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.012 [2024-12-07 10:10:23.517544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.012 [2024-12-07 10:10:23.517550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.012 [2024-12-07 10:10:23.517556] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:55.012 [2024-12-07 10:10:23.517571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:55.012 qpair failed and we were unable to recover it. 
00:35:55.012 [2024-12-07 10:10:23.527484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.012 [2024-12-07 10:10:23.527542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.012 [2024-12-07 10:10:23.527556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.012 [2024-12-07 10:10:23.527563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.012 [2024-12-07 10:10:23.527571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:55.012 [2024-12-07 10:10:23.527586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:55.012 qpair failed and we were unable to recover it. 
00:35:55.012 [2024-12-07 10:10:23.537533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.012 [2024-12-07 10:10:23.537592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.012 [2024-12-07 10:10:23.537607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.012 [2024-12-07 10:10:23.537617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.012 [2024-12-07 10:10:23.537623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:55.012 [2024-12-07 10:10:23.537638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:55.012 qpair failed and we were unable to recover it. 
00:35:55.012 [2024-12-07 10:10:23.547563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.012 [2024-12-07 10:10:23.547625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.012 [2024-12-07 10:10:23.547650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.012 [2024-12-07 10:10:23.547656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.012 [2024-12-07 10:10:23.547662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:55.012 [2024-12-07 10:10:23.547678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:55.012 qpair failed and we were unable to recover it. 
00:35:55.012 [2024-12-07 10:10:23.557575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.012 [2024-12-07 10:10:23.557639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.012 [2024-12-07 10:10:23.557673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.012 [2024-12-07 10:10:23.557685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.012 [2024-12-07 10:10:23.557691] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:55.012 [2024-12-07 10:10:23.557715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:55.012 qpair failed and we were unable to recover it. 
00:35:55.012 [2024-12-07 10:10:23.567613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.012 [2024-12-07 10:10:23.567679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.012 [2024-12-07 10:10:23.567696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.012 [2024-12-07 10:10:23.567702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.012 [2024-12-07 10:10:23.567708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:55.013 [2024-12-07 10:10:23.567724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:55.013 qpair failed and we were unable to recover it. 
00:35:55.013 [2024-12-07 10:10:23.577646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.013 [2024-12-07 10:10:23.577703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.013 [2024-12-07 10:10:23.577717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.013 [2024-12-07 10:10:23.577732] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.013 [2024-12-07 10:10:23.577738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90 00:35:55.013 [2024-12-07 10:10:23.577753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:55.013 qpair failed and we were unable to recover it. 
00:35:55.013 [2024-12-07 10:10:23.587699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.013 [2024-12-07 10:10:23.587760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.013 [2024-12-07 10:10:23.587781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.013 [2024-12-07 10:10:23.587787] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.013 [2024-12-07 10:10:23.587793] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.013 [2024-12-07 10:10:23.587811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.013 qpair failed and we were unable to recover it.
00:35:55.013 [2024-12-07 10:10:23.597688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.013 [2024-12-07 10:10:23.597746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.013 [2024-12-07 10:10:23.597760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.013 [2024-12-07 10:10:23.597770] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.013 [2024-12-07 10:10:23.597776] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.013 [2024-12-07 10:10:23.597791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.013 qpair failed and we were unable to recover it.
00:35:55.013 [2024-12-07 10:10:23.607724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.013 [2024-12-07 10:10:23.607778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.013 [2024-12-07 10:10:23.607791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.013 [2024-12-07 10:10:23.607797] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.013 [2024-12-07 10:10:23.607806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.013 [2024-12-07 10:10:23.607822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.013 qpair failed and we were unable to recover it.
00:35:55.013 [2024-12-07 10:10:23.617769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.013 [2024-12-07 10:10:23.617828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.013 [2024-12-07 10:10:23.617851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.013 [2024-12-07 10:10:23.617858] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.013 [2024-12-07 10:10:23.617864] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.013 [2024-12-07 10:10:23.617879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.013 qpair failed and we were unable to recover it.
00:35:55.013 [2024-12-07 10:10:23.627791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.013 [2024-12-07 10:10:23.627851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.013 [2024-12-07 10:10:23.627864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.013 [2024-12-07 10:10:23.627875] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.013 [2024-12-07 10:10:23.627881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.013 [2024-12-07 10:10:23.627895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.013 qpair failed and we were unable to recover it.
00:35:55.013 [2024-12-07 10:10:23.637808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.013 [2024-12-07 10:10:23.637868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.013 [2024-12-07 10:10:23.637891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.013 [2024-12-07 10:10:23.637897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.013 [2024-12-07 10:10:23.637903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.013 [2024-12-07 10:10:23.637918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.013 qpair failed and we were unable to recover it.
00:35:55.013 [2024-12-07 10:10:23.647839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.013 [2024-12-07 10:10:23.647894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.013 [2024-12-07 10:10:23.647908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.013 [2024-12-07 10:10:23.647919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.013 [2024-12-07 10:10:23.647925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.013 [2024-12-07 10:10:23.647939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.013 qpair failed and we were unable to recover it.
00:35:55.013 [2024-12-07 10:10:23.657822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.013 [2024-12-07 10:10:23.657883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.013 [2024-12-07 10:10:23.657905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.013 [2024-12-07 10:10:23.657912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.013 [2024-12-07 10:10:23.657918] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.013 [2024-12-07 10:10:23.657932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.013 qpair failed and we were unable to recover it.
00:35:55.013 [2024-12-07 10:10:23.667929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.013 [2024-12-07 10:10:23.667994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.013 [2024-12-07 10:10:23.668012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.013 [2024-12-07 10:10:23.668019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.013 [2024-12-07 10:10:23.668024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.013 [2024-12-07 10:10:23.668039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.013 qpair failed and we were unable to recover it.
00:35:55.013 [2024-12-07 10:10:23.677936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.013 [2024-12-07 10:10:23.678011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.013 [2024-12-07 10:10:23.678025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.013 [2024-12-07 10:10:23.678031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.013 [2024-12-07 10:10:23.678037] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.013 [2024-12-07 10:10:23.678055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.013 qpair failed and we were unable to recover it.
00:35:55.013 [2024-12-07 10:10:23.687968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.013 [2024-12-07 10:10:23.688024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.013 [2024-12-07 10:10:23.688038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.013 [2024-12-07 10:10:23.688052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.013 [2024-12-07 10:10:23.688058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.013 [2024-12-07 10:10:23.688073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.013 qpair failed and we were unable to recover it.
00:35:55.013 [2024-12-07 10:10:23.698000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.013 [2024-12-07 10:10:23.698062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.013 [2024-12-07 10:10:23.698076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.013 [2024-12-07 10:10:23.698086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.013 [2024-12-07 10:10:23.698091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.014 [2024-12-07 10:10:23.698107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.014 qpair failed and we were unable to recover it.
00:35:55.014 [2024-12-07 10:10:23.708017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.014 [2024-12-07 10:10:23.708082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.014 [2024-12-07 10:10:23.708104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.014 [2024-12-07 10:10:23.708111] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.014 [2024-12-07 10:10:23.708117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.014 [2024-12-07 10:10:23.708132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.014 qpair failed and we were unable to recover it.
00:35:55.014 [2024-12-07 10:10:23.718039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.014 [2024-12-07 10:10:23.718096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.014 [2024-12-07 10:10:23.718109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.014 [2024-12-07 10:10:23.718121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.014 [2024-12-07 10:10:23.718127] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.014 [2024-12-07 10:10:23.718142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.014 qpair failed and we were unable to recover it.
00:35:55.014 [2024-12-07 10:10:23.728075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.014 [2024-12-07 10:10:23.728135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.014 [2024-12-07 10:10:23.728159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.014 [2024-12-07 10:10:23.728166] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.014 [2024-12-07 10:10:23.728171] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.014 [2024-12-07 10:10:23.728186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.014 qpair failed and we were unable to recover it.
00:35:55.272 [2024-12-07 10:10:23.738110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.272 [2024-12-07 10:10:23.738171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.272 [2024-12-07 10:10:23.738197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.272 [2024-12-07 10:10:23.738204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.272 [2024-12-07 10:10:23.738211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.272 [2024-12-07 10:10:23.738228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.273 qpair failed and we were unable to recover it.
00:35:55.273 [2024-12-07 10:10:23.748109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.273 [2024-12-07 10:10:23.748169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.273 [2024-12-07 10:10:23.748192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.273 [2024-12-07 10:10:23.748199] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.273 [2024-12-07 10:10:23.748205] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.273 [2024-12-07 10:10:23.748222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.273 qpair failed and we were unable to recover it.
00:35:55.273 [2024-12-07 10:10:23.758155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.273 [2024-12-07 10:10:23.758216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.273 [2024-12-07 10:10:23.758230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.273 [2024-12-07 10:10:23.758242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.273 [2024-12-07 10:10:23.758248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.273 [2024-12-07 10:10:23.758263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.273 qpair failed and we were unable to recover it.
00:35:55.273 [2024-12-07 10:10:23.768238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.273 [2024-12-07 10:10:23.768316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.273 [2024-12-07 10:10:23.768329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.273 [2024-12-07 10:10:23.768336] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.273 [2024-12-07 10:10:23.768346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.273 [2024-12-07 10:10:23.768361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.273 qpair failed and we were unable to recover it.
00:35:55.273 [2024-12-07 10:10:23.778222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.273 [2024-12-07 10:10:23.778284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.273 [2024-12-07 10:10:23.778298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.273 [2024-12-07 10:10:23.778305] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.273 [2024-12-07 10:10:23.778316] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.273 [2024-12-07 10:10:23.778331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.273 qpair failed and we were unable to recover it.
00:35:55.273 [2024-12-07 10:10:23.788244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.273 [2024-12-07 10:10:23.788303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.273 [2024-12-07 10:10:23.788317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.273 [2024-12-07 10:10:23.788329] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.273 [2024-12-07 10:10:23.788335] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.273 [2024-12-07 10:10:23.788350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.273 qpair failed and we were unable to recover it.
00:35:55.273 [2024-12-07 10:10:23.798272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.273 [2024-12-07 10:10:23.798330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.273 [2024-12-07 10:10:23.798344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.273 [2024-12-07 10:10:23.798357] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.273 [2024-12-07 10:10:23.798363] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.273 [2024-12-07 10:10:23.798378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.273 qpair failed and we were unable to recover it.
00:35:55.273 [2024-12-07 10:10:23.808311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.273 [2024-12-07 10:10:23.808395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.273 [2024-12-07 10:10:23.808408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.273 [2024-12-07 10:10:23.808415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.273 [2024-12-07 10:10:23.808420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.273 [2024-12-07 10:10:23.808436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.273 qpair failed and we were unable to recover it.
00:35:55.273 [2024-12-07 10:10:23.818347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.273 [2024-12-07 10:10:23.818415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.273 [2024-12-07 10:10:23.818437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.273 [2024-12-07 10:10:23.818443] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.273 [2024-12-07 10:10:23.818449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.273 [2024-12-07 10:10:23.818464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.273 qpair failed and we were unable to recover it.
00:35:55.273 [2024-12-07 10:10:23.828402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.273 [2024-12-07 10:10:23.828462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.273 [2024-12-07 10:10:23.828476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.273 [2024-12-07 10:10:23.828488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.273 [2024-12-07 10:10:23.828495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.273 [2024-12-07 10:10:23.828511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.273 qpair failed and we were unable to recover it.
00:35:55.273 [2024-12-07 10:10:23.838392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.273 [2024-12-07 10:10:23.838447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.273 [2024-12-07 10:10:23.838460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.273 [2024-12-07 10:10:23.838474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.273 [2024-12-07 10:10:23.838480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.273 [2024-12-07 10:10:23.838495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.273 qpair failed and we were unable to recover it.
00:35:55.273 [2024-12-07 10:10:23.848408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.273 [2024-12-07 10:10:23.848496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.273 [2024-12-07 10:10:23.848509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.273 [2024-12-07 10:10:23.848516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.273 [2024-12-07 10:10:23.848521] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.273 [2024-12-07 10:10:23.848536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.273 qpair failed and we were unable to recover it.
00:35:55.273 [2024-12-07 10:10:23.858463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.273 [2024-12-07 10:10:23.858529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.273 [2024-12-07 10:10:23.858543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.273 [2024-12-07 10:10:23.858552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.273 [2024-12-07 10:10:23.858561] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.273 [2024-12-07 10:10:23.858576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.273 qpair failed and we were unable to recover it.
00:35:55.273 [2024-12-07 10:10:23.868472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.273 [2024-12-07 10:10:23.868578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.273 [2024-12-07 10:10:23.868591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.273 [2024-12-07 10:10:23.868597] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.273 [2024-12-07 10:10:23.868604] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.274 [2024-12-07 10:10:23.868619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.274 qpair failed and we were unable to recover it.
00:35:55.274 [2024-12-07 10:10:23.878422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.274 [2024-12-07 10:10:23.878483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.274 [2024-12-07 10:10:23.878504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.274 [2024-12-07 10:10:23.878511] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.274 [2024-12-07 10:10:23.878517] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.274 [2024-12-07 10:10:23.878531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.274 qpair failed and we were unable to recover it.
00:35:55.274 [2024-12-07 10:10:23.888521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.274 [2024-12-07 10:10:23.888581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.274 [2024-12-07 10:10:23.888594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.274 [2024-12-07 10:10:23.888607] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.274 [2024-12-07 10:10:23.888613] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.274 [2024-12-07 10:10:23.888627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.274 qpair failed and we were unable to recover it.
00:35:55.274 [2024-12-07 10:10:23.898564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.274 [2024-12-07 10:10:23.898624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.274 [2024-12-07 10:10:23.898640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.274 [2024-12-07 10:10:23.898652] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.274 [2024-12-07 10:10:23.898658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbf8000b90
00:35:55.274 [2024-12-07 10:10:23.898673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:55.274 qpair failed and we were unable to recover it.
00:35:55.274 [2024-12-07 10:10:23.908586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.274 [2024-12-07 10:10:23.908680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.274 [2024-12-07 10:10:23.908708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.274 [2024-12-07 10:10:23.908720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.274 [2024-12-07 10:10:23.908729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:55.274 [2024-12-07 10:10:23.908754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:55.274 qpair failed and we were unable to recover it.
00:35:55.274 [2024-12-07 10:10:23.918611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.274 [2024-12-07 10:10:23.918676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.274 [2024-12-07 10:10:23.918694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.274 [2024-12-07 10:10:23.918702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.274 [2024-12-07 10:10:23.918708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:55.274 [2024-12-07 10:10:23.918723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:55.274 qpair failed and we were unable to recover it.
00:35:55.274 [2024-12-07 10:10:23.928628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.274 [2024-12-07 10:10:23.928683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.274 [2024-12-07 10:10:23.928698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.274 [2024-12-07 10:10:23.928704] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.274 [2024-12-07 10:10:23.928710] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:55.274 [2024-12-07 10:10:23.928726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:55.274 qpair failed and we were unable to recover it.
00:35:55.274 [2024-12-07 10:10:23.938736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.274 [2024-12-07 10:10:23.938800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.274 [2024-12-07 10:10:23.938820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.274 [2024-12-07 10:10:23.938827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.274 [2024-12-07 10:10:23.938832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:55.274 [2024-12-07 10:10:23.938848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:55.274 qpair failed and we were unable to recover it.
00:35:55.274 [2024-12-07 10:10:23.948705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.274 [2024-12-07 10:10:23.948766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.274 [2024-12-07 10:10:23.948790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.274 [2024-12-07 10:10:23.948801] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.274 [2024-12-07 10:10:23.948807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.274 [2024-12-07 10:10:23.948821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-12-07 10:10:23.958795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.274 [2024-12-07 10:10:23.958903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.274 [2024-12-07 10:10:23.958918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.274 [2024-12-07 10:10:23.958925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.274 [2024-12-07 10:10:23.958932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.274 [2024-12-07 10:10:23.958950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-12-07 10:10:23.968735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.274 [2024-12-07 10:10:23.968794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.274 [2024-12-07 10:10:23.968808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.274 [2024-12-07 10:10:23.968814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.274 [2024-12-07 10:10:23.968820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.274 [2024-12-07 10:10:23.968834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-12-07 10:10:23.978807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.274 [2024-12-07 10:10:23.978868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.274 [2024-12-07 10:10:23.978883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.274 [2024-12-07 10:10:23.978895] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.274 [2024-12-07 10:10:23.978901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.274 [2024-12-07 10:10:23.978916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-12-07 10:10:23.988819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.274 [2024-12-07 10:10:23.988883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.274 [2024-12-07 10:10:23.988904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.274 [2024-12-07 10:10:23.988910] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.274 [2024-12-07 10:10:23.988916] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.274 [2024-12-07 10:10:23.988931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.533 [2024-12-07 10:10:23.998847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.534 [2024-12-07 10:10:23.998902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.534 [2024-12-07 10:10:23.998921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.534 [2024-12-07 10:10:23.998931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.534 [2024-12-07 10:10:23.998937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.534 [2024-12-07 10:10:23.998957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.534 qpair failed and we were unable to recover it. 
00:35:55.534 [2024-12-07 10:10:24.008878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.534 [2024-12-07 10:10:24.008932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.534 [2024-12-07 10:10:24.008952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.534 [2024-12-07 10:10:24.008960] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.534 [2024-12-07 10:10:24.008965] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.534 [2024-12-07 10:10:24.008981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.534 qpair failed and we were unable to recover it. 
00:35:55.534 [2024-12-07 10:10:24.018909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.534 [2024-12-07 10:10:24.018971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.534 [2024-12-07 10:10:24.018995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.534 [2024-12-07 10:10:24.019002] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.534 [2024-12-07 10:10:24.019008] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.534 [2024-12-07 10:10:24.019022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.534 qpair failed and we were unable to recover it. 
00:35:55.534 [2024-12-07 10:10:24.028931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.534 [2024-12-07 10:10:24.029003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.534 [2024-12-07 10:10:24.029018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.534 [2024-12-07 10:10:24.029025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.534 [2024-12-07 10:10:24.029031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.534 [2024-12-07 10:10:24.029045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.534 qpair failed and we were unable to recover it. 
00:35:55.534 [2024-12-07 10:10:24.038966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.534 [2024-12-07 10:10:24.039021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.534 [2024-12-07 10:10:24.039036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.534 [2024-12-07 10:10:24.039051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.534 [2024-12-07 10:10:24.039057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.534 [2024-12-07 10:10:24.039072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.534 qpair failed and we were unable to recover it. 
00:35:55.534 [2024-12-07 10:10:24.048989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.534 [2024-12-07 10:10:24.049048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.534 [2024-12-07 10:10:24.049071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.534 [2024-12-07 10:10:24.049078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.534 [2024-12-07 10:10:24.049084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.534 [2024-12-07 10:10:24.049098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.534 qpair failed and we were unable to recover it. 
00:35:55.534 [2024-12-07 10:10:24.059075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.534 [2024-12-07 10:10:24.059171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.534 [2024-12-07 10:10:24.059186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.534 [2024-12-07 10:10:24.059192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.534 [2024-12-07 10:10:24.059198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.534 [2024-12-07 10:10:24.059213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.534 qpair failed and we were unable to recover it. 
00:35:55.534 [2024-12-07 10:10:24.069044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.534 [2024-12-07 10:10:24.069106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.534 [2024-12-07 10:10:24.069121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.534 [2024-12-07 10:10:24.069136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.534 [2024-12-07 10:10:24.069142] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.534 [2024-12-07 10:10:24.069156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.534 qpair failed and we were unable to recover it. 
00:35:55.534 [2024-12-07 10:10:24.079083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.534 [2024-12-07 10:10:24.079142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.534 [2024-12-07 10:10:24.079159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.534 [2024-12-07 10:10:24.079169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.534 [2024-12-07 10:10:24.079175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.534 [2024-12-07 10:10:24.079191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.534 qpair failed and we were unable to recover it. 
00:35:55.534 [2024-12-07 10:10:24.089103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.534 [2024-12-07 10:10:24.089164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.534 [2024-12-07 10:10:24.089187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.534 [2024-12-07 10:10:24.089194] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.534 [2024-12-07 10:10:24.089200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.534 [2024-12-07 10:10:24.089214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.534 qpair failed and we were unable to recover it. 
00:35:55.534 [2024-12-07 10:10:24.099171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.534 [2024-12-07 10:10:24.099257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.534 [2024-12-07 10:10:24.099272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.534 [2024-12-07 10:10:24.099279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.534 [2024-12-07 10:10:24.099284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.534 [2024-12-07 10:10:24.099299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.534 qpair failed and we were unable to recover it. 
00:35:55.534 [2024-12-07 10:10:24.109174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.534 [2024-12-07 10:10:24.109237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.534 [2024-12-07 10:10:24.109251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.534 [2024-12-07 10:10:24.109260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.534 [2024-12-07 10:10:24.109267] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.534 [2024-12-07 10:10:24.109281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.534 qpair failed and we were unable to recover it. 
00:35:55.534 [2024-12-07 10:10:24.119213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.534 [2024-12-07 10:10:24.119269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.534 [2024-12-07 10:10:24.119284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.534 [2024-12-07 10:10:24.119296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.534 [2024-12-07 10:10:24.119302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.534 [2024-12-07 10:10:24.119316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.534 qpair failed and we were unable to recover it. 
00:35:55.534 [2024-12-07 10:10:24.129174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.535 [2024-12-07 10:10:24.129265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.535 [2024-12-07 10:10:24.129280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.535 [2024-12-07 10:10:24.129290] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.535 [2024-12-07 10:10:24.129296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.535 [2024-12-07 10:10:24.129311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.535 qpair failed and we were unable to recover it. 
00:35:55.535 [2024-12-07 10:10:24.139277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.535 [2024-12-07 10:10:24.139338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.535 [2024-12-07 10:10:24.139352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.535 [2024-12-07 10:10:24.139366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.535 [2024-12-07 10:10:24.139372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.535 [2024-12-07 10:10:24.139386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.535 qpair failed and we were unable to recover it. 
00:35:55.535 [2024-12-07 10:10:24.149280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.535 [2024-12-07 10:10:24.149340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.535 [2024-12-07 10:10:24.149355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.535 [2024-12-07 10:10:24.149364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.535 [2024-12-07 10:10:24.149370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.535 [2024-12-07 10:10:24.149385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.535 qpair failed and we were unable to recover it. 
00:35:55.535 [2024-12-07 10:10:24.159320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.535 [2024-12-07 10:10:24.159399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.535 [2024-12-07 10:10:24.159413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.535 [2024-12-07 10:10:24.159420] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.535 [2024-12-07 10:10:24.159426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.535 [2024-12-07 10:10:24.159440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.535 qpair failed and we were unable to recover it. 
00:35:55.535 [2024-12-07 10:10:24.169337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.535 [2024-12-07 10:10:24.169394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.535 [2024-12-07 10:10:24.169409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.535 [2024-12-07 10:10:24.169420] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.535 [2024-12-07 10:10:24.169426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.535 [2024-12-07 10:10:24.169441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.535 qpair failed and we were unable to recover it. 
00:35:55.535 [2024-12-07 10:10:24.179379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.535 [2024-12-07 10:10:24.179441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.535 [2024-12-07 10:10:24.179455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.535 [2024-12-07 10:10:24.179467] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.535 [2024-12-07 10:10:24.179473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.535 [2024-12-07 10:10:24.179487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.535 qpair failed and we were unable to recover it. 
00:35:55.535 [2024-12-07 10:10:24.189400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.535 [2024-12-07 10:10:24.189457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.535 [2024-12-07 10:10:24.189471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.535 [2024-12-07 10:10:24.189478] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.535 [2024-12-07 10:10:24.189487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.535 [2024-12-07 10:10:24.189500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.535 qpair failed and we were unable to recover it. 
00:35:55.535 [2024-12-07 10:10:24.199460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.535 [2024-12-07 10:10:24.199567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.535 [2024-12-07 10:10:24.199580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.535 [2024-12-07 10:10:24.199587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.535 [2024-12-07 10:10:24.199593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.535 [2024-12-07 10:10:24.199607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.535 qpair failed and we were unable to recover it. 
00:35:55.535 [2024-12-07 10:10:24.209452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.535 [2024-12-07 10:10:24.209510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.535 [2024-12-07 10:10:24.209524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.535 [2024-12-07 10:10:24.209535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.535 [2024-12-07 10:10:24.209540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.535 [2024-12-07 10:10:24.209555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.535 qpair failed and we were unable to recover it. 
00:35:55.535 [2024-12-07 10:10:24.219487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.535 [2024-12-07 10:10:24.219555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.535 [2024-12-07 10:10:24.219572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.535 [2024-12-07 10:10:24.219582] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.535 [2024-12-07 10:10:24.219588] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.535 [2024-12-07 10:10:24.219602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.535 qpair failed and we were unable to recover it. 
00:35:55.535 [2024-12-07 10:10:24.229559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.535 [2024-12-07 10:10:24.229658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.535 [2024-12-07 10:10:24.229673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.535 [2024-12-07 10:10:24.229680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.535 [2024-12-07 10:10:24.229685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.535 [2024-12-07 10:10:24.229700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.535 qpair failed and we were unable to recover it. 
00:35:55.535 [2024-12-07 10:10:24.239519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.535 [2024-12-07 10:10:24.239575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.535 [2024-12-07 10:10:24.239590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.535 [2024-12-07 10:10:24.239603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.535 [2024-12-07 10:10:24.239608] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.535 [2024-12-07 10:10:24.239622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.535 qpair failed and we were unable to recover it. 
00:35:55.535 [2024-12-07 10:10:24.249555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.535 [2024-12-07 10:10:24.249613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.535 [2024-12-07 10:10:24.249628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.535 [2024-12-07 10:10:24.249639] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.535 [2024-12-07 10:10:24.249645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.535 [2024-12-07 10:10:24.249659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.535 qpair failed and we were unable to recover it. 
00:35:55.794 [2024-12-07 10:10:24.259593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.794 [2024-12-07 10:10:24.259657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.794 [2024-12-07 10:10:24.259680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.794 [2024-12-07 10:10:24.259687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.794 [2024-12-07 10:10:24.259693] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.794 [2024-12-07 10:10:24.259709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.794 qpair failed and we were unable to recover it. 
00:35:55.794 [2024-12-07 10:10:24.269666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.794 [2024-12-07 10:10:24.269774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.794 [2024-12-07 10:10:24.269792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.794 [2024-12-07 10:10:24.269799] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.794 [2024-12-07 10:10:24.269805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.794 [2024-12-07 10:10:24.269820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.794 qpair failed and we were unable to recover it. 
00:35:55.794 [2024-12-07 10:10:24.279578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.794 [2024-12-07 10:10:24.279640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.794 [2024-12-07 10:10:24.279663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.794 [2024-12-07 10:10:24.279669] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.794 [2024-12-07 10:10:24.279675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.794 [2024-12-07 10:10:24.279690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.794 qpair failed and we were unable to recover it. 
00:35:55.794 [2024-12-07 10:10:24.289600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.794 [2024-12-07 10:10:24.289662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.794 [2024-12-07 10:10:24.289685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.795 [2024-12-07 10:10:24.289692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.795 [2024-12-07 10:10:24.289699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.795 [2024-12-07 10:10:24.289713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.795 qpair failed and we were unable to recover it. 
00:35:55.795 [2024-12-07 10:10:24.299637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.795 [2024-12-07 10:10:24.299698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.795 [2024-12-07 10:10:24.299714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.795 [2024-12-07 10:10:24.299724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.795 [2024-12-07 10:10:24.299730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.795 [2024-12-07 10:10:24.299745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.795 qpair failed and we were unable to recover it. 
00:35:55.795 [2024-12-07 10:10:24.309663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.795 [2024-12-07 10:10:24.309731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.795 [2024-12-07 10:10:24.309755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.795 [2024-12-07 10:10:24.309761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.795 [2024-12-07 10:10:24.309768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.795 [2024-12-07 10:10:24.309782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.795 qpair failed and we were unable to recover it. 
00:35:55.795 [2024-12-07 10:10:24.319690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.795 [2024-12-07 10:10:24.319748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.795 [2024-12-07 10:10:24.319764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.795 [2024-12-07 10:10:24.319771] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.795 [2024-12-07 10:10:24.319777] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.795 [2024-12-07 10:10:24.319791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.795 qpair failed and we were unable to recover it. 
00:35:55.795 [2024-12-07 10:10:24.329714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.795 [2024-12-07 10:10:24.329771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.795 [2024-12-07 10:10:24.329786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.795 [2024-12-07 10:10:24.329796] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.795 [2024-12-07 10:10:24.329802] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.795 [2024-12-07 10:10:24.329816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.795 qpair failed and we were unable to recover it. 
00:35:55.795 [2024-12-07 10:10:24.339820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.795 [2024-12-07 10:10:24.339880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.795 [2024-12-07 10:10:24.339895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.795 [2024-12-07 10:10:24.339911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.795 [2024-12-07 10:10:24.339916] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.795 [2024-12-07 10:10:24.339931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.795 qpair failed and we were unable to recover it. 
00:35:55.795 [2024-12-07 10:10:24.349847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.795 [2024-12-07 10:10:24.349904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.795 [2024-12-07 10:10:24.349919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.795 [2024-12-07 10:10:24.349934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.795 [2024-12-07 10:10:24.349940] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.795 [2024-12-07 10:10:24.349959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.795 qpair failed and we were unable to recover it. 
00:35:55.795 [2024-12-07 10:10:24.359887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.795 [2024-12-07 10:10:24.359954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.795 [2024-12-07 10:10:24.359971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.795 [2024-12-07 10:10:24.359978] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.795 [2024-12-07 10:10:24.359983] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.795 [2024-12-07 10:10:24.359998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.795 qpair failed and we were unable to recover it. 
00:35:55.795 [2024-12-07 10:10:24.369938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.795 [2024-12-07 10:10:24.370008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.795 [2024-12-07 10:10:24.370027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.795 [2024-12-07 10:10:24.370033] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.795 [2024-12-07 10:10:24.370039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.795 [2024-12-07 10:10:24.370054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.795 qpair failed and we were unable to recover it. 
00:35:55.795 [2024-12-07 10:10:24.379967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.795 [2024-12-07 10:10:24.380030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.795 [2024-12-07 10:10:24.380044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.795 [2024-12-07 10:10:24.380055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.795 [2024-12-07 10:10:24.380061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.795 [2024-12-07 10:10:24.380076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.795 qpair failed and we were unable to recover it. 
00:35:55.795 [2024-12-07 10:10:24.389969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.795 [2024-12-07 10:10:24.390028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.795 [2024-12-07 10:10:24.390043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.795 [2024-12-07 10:10:24.390056] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.795 [2024-12-07 10:10:24.390062] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.795 [2024-12-07 10:10:24.390077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.795 qpair failed and we were unable to recover it. 
00:35:55.795 [2024-12-07 10:10:24.400028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.795 [2024-12-07 10:10:24.400090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.795 [2024-12-07 10:10:24.400110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.795 [2024-12-07 10:10:24.400117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.795 [2024-12-07 10:10:24.400123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.795 [2024-12-07 10:10:24.400137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.795 qpair failed and we were unable to recover it. 
00:35:55.795 [2024-12-07 10:10:24.410024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.795 [2024-12-07 10:10:24.410083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.795 [2024-12-07 10:10:24.410105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.795 [2024-12-07 10:10:24.410111] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.795 [2024-12-07 10:10:24.410117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.795 [2024-12-07 10:10:24.410131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.795 qpair failed and we were unable to recover it. 
00:35:55.795 [2024-12-07 10:10:24.420081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.795 [2024-12-07 10:10:24.420186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.795 [2024-12-07 10:10:24.420200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.795 [2024-12-07 10:10:24.420207] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.795 [2024-12-07 10:10:24.420213] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.796 [2024-12-07 10:10:24.420227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.796 qpair failed and we were unable to recover it. 
00:35:55.796 [2024-12-07 10:10:24.430141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.796 [2024-12-07 10:10:24.430204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.796 [2024-12-07 10:10:24.430223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.796 [2024-12-07 10:10:24.430230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.796 [2024-12-07 10:10:24.430235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.796 [2024-12-07 10:10:24.430250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.796 qpair failed and we were unable to recover it. 
00:35:55.796 [2024-12-07 10:10:24.440111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.796 [2024-12-07 10:10:24.440167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.796 [2024-12-07 10:10:24.440182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.796 [2024-12-07 10:10:24.440194] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.796 [2024-12-07 10:10:24.440200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.796 [2024-12-07 10:10:24.440218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.796 qpair failed and we were unable to recover it. 
00:35:55.796 [2024-12-07 10:10:24.450135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.796 [2024-12-07 10:10:24.450196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.796 [2024-12-07 10:10:24.450216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.796 [2024-12-07 10:10:24.450222] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.796 [2024-12-07 10:10:24.450228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.796 [2024-12-07 10:10:24.450242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.796 qpair failed and we were unable to recover it. 
00:35:55.796 [2024-12-07 10:10:24.460213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.796 [2024-12-07 10:10:24.460274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.796 [2024-12-07 10:10:24.460288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.796 [2024-12-07 10:10:24.460298] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.796 [2024-12-07 10:10:24.460304] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.796 [2024-12-07 10:10:24.460318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.796 qpair failed and we were unable to recover it. 
00:35:55.796 [2024-12-07 10:10:24.470209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.796 [2024-12-07 10:10:24.470271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.796 [2024-12-07 10:10:24.470293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.796 [2024-12-07 10:10:24.470299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.796 [2024-12-07 10:10:24.470305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.796 [2024-12-07 10:10:24.470319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.796 qpair failed and we were unable to recover it. 
00:35:55.796 [2024-12-07 10:10:24.480240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.796 [2024-12-07 10:10:24.480298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.796 [2024-12-07 10:10:24.480321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.796 [2024-12-07 10:10:24.480327] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.796 [2024-12-07 10:10:24.480333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.796 [2024-12-07 10:10:24.480348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.796 qpair failed and we were unable to recover it. 
00:35:55.796 [2024-12-07 10:10:24.490265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.796 [2024-12-07 10:10:24.490324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.796 [2024-12-07 10:10:24.490345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.796 [2024-12-07 10:10:24.490352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.796 [2024-12-07 10:10:24.490357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.796 [2024-12-07 10:10:24.490372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.796 qpair failed and we were unable to recover it. 
00:35:55.796 [2024-12-07 10:10:24.500346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.796 [2024-12-07 10:10:24.500408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.796 [2024-12-07 10:10:24.500422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.796 [2024-12-07 10:10:24.500433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.796 [2024-12-07 10:10:24.500439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.796 [2024-12-07 10:10:24.500454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.796 qpair failed and we were unable to recover it. 
00:35:55.796 [2024-12-07 10:10:24.510297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.796 [2024-12-07 10:10:24.510358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.796 [2024-12-07 10:10:24.510372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.796 [2024-12-07 10:10:24.510385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.796 [2024-12-07 10:10:24.510391] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:55.796 [2024-12-07 10:10:24.510405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.796 qpair failed and we were unable to recover it. 
00:35:56.055 [2024-12-07 10:10:24.520330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.055 [2024-12-07 10:10:24.520390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.055 [2024-12-07 10:10:24.520413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.055 [2024-12-07 10:10:24.520419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.055 [2024-12-07 10:10:24.520425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.055 [2024-12-07 10:10:24.520441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.055 qpair failed and we were unable to recover it. 
00:35:56.055 [2024-12-07 10:10:24.530385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.055 [2024-12-07 10:10:24.530448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.055 [2024-12-07 10:10:24.530470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.055 [2024-12-07 10:10:24.530477] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.055 [2024-12-07 10:10:24.530483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.055 [2024-12-07 10:10:24.530502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.055 qpair failed and we were unable to recover it. 
00:35:56.055 [2024-12-07 10:10:24.540455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.055 [2024-12-07 10:10:24.540517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.055 [2024-12-07 10:10:24.540540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.055 [2024-12-07 10:10:24.540546] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.055 [2024-12-07 10:10:24.540552] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.055 [2024-12-07 10:10:24.540567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.055 qpair failed and we were unable to recover it. 
00:35:56.055 [2024-12-07 10:10:24.550422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.055 [2024-12-07 10:10:24.550481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.055 [2024-12-07 10:10:24.550496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.055 [2024-12-07 10:10:24.550507] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.055 [2024-12-07 10:10:24.550513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.055 [2024-12-07 10:10:24.550528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.055 qpair failed and we were unable to recover it.
00:35:56.055 [2024-12-07 10:10:24.560466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.055 [2024-12-07 10:10:24.560526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.055 [2024-12-07 10:10:24.560549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.055 [2024-12-07 10:10:24.560555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.055 [2024-12-07 10:10:24.560561] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.055 [2024-12-07 10:10:24.560575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.055 qpair failed and we were unable to recover it.
00:35:56.056 [2024-12-07 10:10:24.570446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.056 [2024-12-07 10:10:24.570509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.056 [2024-12-07 10:10:24.570523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.056 [2024-12-07 10:10:24.570530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.056 [2024-12-07 10:10:24.570536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.056 [2024-12-07 10:10:24.570551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.056 qpair failed and we were unable to recover it.
00:35:56.056 [2024-12-07 10:10:24.580583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.056 [2024-12-07 10:10:24.580644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.056 [2024-12-07 10:10:24.580663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.056 [2024-12-07 10:10:24.580671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.056 [2024-12-07 10:10:24.580677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.056 [2024-12-07 10:10:24.580692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.056 qpair failed and we were unable to recover it.
00:35:56.056 [2024-12-07 10:10:24.590499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.056 [2024-12-07 10:10:24.590579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.056 [2024-12-07 10:10:24.590594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.056 [2024-12-07 10:10:24.590600] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.056 [2024-12-07 10:10:24.590606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.056 [2024-12-07 10:10:24.590620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.056 qpair failed and we were unable to recover it.
00:35:56.056 [2024-12-07 10:10:24.600589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.056 [2024-12-07 10:10:24.600648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.056 [2024-12-07 10:10:24.600663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.056 [2024-12-07 10:10:24.600674] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.056 [2024-12-07 10:10:24.600680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.056 [2024-12-07 10:10:24.600694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.056 qpair failed and we were unable to recover it.
00:35:56.056 [2024-12-07 10:10:24.610552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.056 [2024-12-07 10:10:24.610610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.056 [2024-12-07 10:10:24.610634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.056 [2024-12-07 10:10:24.610641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.056 [2024-12-07 10:10:24.610647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.056 [2024-12-07 10:10:24.610663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.056 qpair failed and we were unable to recover it.
00:35:56.056 [2024-12-07 10:10:24.620610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.056 [2024-12-07 10:10:24.620671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.056 [2024-12-07 10:10:24.620694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.056 [2024-12-07 10:10:24.620701] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.056 [2024-12-07 10:10:24.620707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.056 [2024-12-07 10:10:24.620725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.056 qpair failed and we were unable to recover it.
00:35:56.056 [2024-12-07 10:10:24.630699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.056 [2024-12-07 10:10:24.630762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.056 [2024-12-07 10:10:24.630785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.056 [2024-12-07 10:10:24.630791] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.056 [2024-12-07 10:10:24.630797] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.056 [2024-12-07 10:10:24.630811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.056 qpair failed and we were unable to recover it.
00:35:56.056 [2024-12-07 10:10:24.640729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.056 [2024-12-07 10:10:24.640821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.056 [2024-12-07 10:10:24.640836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.056 [2024-12-07 10:10:24.640843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.056 [2024-12-07 10:10:24.640849] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.056 [2024-12-07 10:10:24.640864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.056 qpair failed and we were unable to recover it.
00:35:56.056 [2024-12-07 10:10:24.650718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.056 [2024-12-07 10:10:24.650795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.056 [2024-12-07 10:10:24.650810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.056 [2024-12-07 10:10:24.650817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.056 [2024-12-07 10:10:24.650822] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.056 [2024-12-07 10:10:24.650837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.056 qpair failed and we were unable to recover it.
00:35:56.056 [2024-12-07 10:10:24.660771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.056 [2024-12-07 10:10:24.660833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.056 [2024-12-07 10:10:24.660855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.056 [2024-12-07 10:10:24.660861] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.056 [2024-12-07 10:10:24.660867] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.056 [2024-12-07 10:10:24.660882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.056 qpair failed and we were unable to recover it.
00:35:56.056 [2024-12-07 10:10:24.670797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.056 [2024-12-07 10:10:24.670856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.056 [2024-12-07 10:10:24.670874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.056 [2024-12-07 10:10:24.670881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.056 [2024-12-07 10:10:24.670887] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.056 [2024-12-07 10:10:24.670902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.056 qpair failed and we were unable to recover it.
00:35:56.056 [2024-12-07 10:10:24.680783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.056 [2024-12-07 10:10:24.680841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.056 [2024-12-07 10:10:24.680856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.056 [2024-12-07 10:10:24.680868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.056 [2024-12-07 10:10:24.680874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.056 [2024-12-07 10:10:24.680888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.056 qpair failed and we were unable to recover it.
00:35:56.056 [2024-12-07 10:10:24.690865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.056 [2024-12-07 10:10:24.690924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.056 [2024-12-07 10:10:24.690952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.056 [2024-12-07 10:10:24.690959] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.056 [2024-12-07 10:10:24.690965] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.056 [2024-12-07 10:10:24.690979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.056 qpair failed and we were unable to recover it.
00:35:56.056 [2024-12-07 10:10:24.700888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.056 [2024-12-07 10:10:24.700972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.057 [2024-12-07 10:10:24.700987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.057 [2024-12-07 10:10:24.700993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.057 [2024-12-07 10:10:24.700999] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.057 [2024-12-07 10:10:24.701013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.057 qpair failed and we were unable to recover it.
00:35:56.057 [2024-12-07 10:10:24.710877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.057 [2024-12-07 10:10:24.710958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.057 [2024-12-07 10:10:24.710972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.057 [2024-12-07 10:10:24.710979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.057 [2024-12-07 10:10:24.710987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.057 [2024-12-07 10:10:24.711002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.057 qpair failed and we were unable to recover it.
00:35:56.057 [2024-12-07 10:10:24.720956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.057 [2024-12-07 10:10:24.721015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.057 [2024-12-07 10:10:24.721029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.057 [2024-12-07 10:10:24.721036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.057 [2024-12-07 10:10:24.721041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.057 [2024-12-07 10:10:24.721055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.057 qpair failed and we were unable to recover it.
00:35:56.057 [2024-12-07 10:10:24.730960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.057 [2024-12-07 10:10:24.731020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.057 [2024-12-07 10:10:24.731041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.057 [2024-12-07 10:10:24.731047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.057 [2024-12-07 10:10:24.731052] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.057 [2024-12-07 10:10:24.731067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.057 qpair failed and we were unable to recover it.
00:35:56.057 [2024-12-07 10:10:24.741019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.057 [2024-12-07 10:10:24.741095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.057 [2024-12-07 10:10:24.741115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.057 [2024-12-07 10:10:24.741122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.057 [2024-12-07 10:10:24.741128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.057 [2024-12-07 10:10:24.741142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.057 qpair failed and we were unable to recover it.
00:35:56.057 [2024-12-07 10:10:24.751077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.057 [2024-12-07 10:10:24.751144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.057 [2024-12-07 10:10:24.751161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.057 [2024-12-07 10:10:24.751168] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.057 [2024-12-07 10:10:24.751174] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.057 [2024-12-07 10:10:24.751189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.057 qpair failed and we were unable to recover it.
00:35:56.057 [2024-12-07 10:10:24.761053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.057 [2024-12-07 10:10:24.761108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.057 [2024-12-07 10:10:24.761134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.057 [2024-12-07 10:10:24.761141] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.057 [2024-12-07 10:10:24.761147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.057 [2024-12-07 10:10:24.761163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.057 qpair failed and we were unable to recover it.
00:35:56.057 [2024-12-07 10:10:24.771097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.057 [2024-12-07 10:10:24.771162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.057 [2024-12-07 10:10:24.771183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.057 [2024-12-07 10:10:24.771189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.057 [2024-12-07 10:10:24.771196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.057 [2024-12-07 10:10:24.771211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.057 qpair failed and we were unable to recover it.
00:35:56.315 [2024-12-07 10:10:24.781139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.315 [2024-12-07 10:10:24.781202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.315 [2024-12-07 10:10:24.781228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.315 [2024-12-07 10:10:24.781235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.315 [2024-12-07 10:10:24.781241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.315 [2024-12-07 10:10:24.781257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.315 qpair failed and we were unable to recover it.
00:35:56.315 [2024-12-07 10:10:24.791133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.316 [2024-12-07 10:10:24.791193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.316 [2024-12-07 10:10:24.791216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.316 [2024-12-07 10:10:24.791223] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.316 [2024-12-07 10:10:24.791229] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.316 [2024-12-07 10:10:24.791245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.316 qpair failed and we were unable to recover it.
00:35:56.316 [2024-12-07 10:10:24.801105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.316 [2024-12-07 10:10:24.801164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.316 [2024-12-07 10:10:24.801179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.316 [2024-12-07 10:10:24.801186] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.316 [2024-12-07 10:10:24.801205] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.316 [2024-12-07 10:10:24.801220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.316 qpair failed and we were unable to recover it.
00:35:56.316 [2024-12-07 10:10:24.811133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.316 [2024-12-07 10:10:24.811189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.316 [2024-12-07 10:10:24.811204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.316 [2024-12-07 10:10:24.811218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.316 [2024-12-07 10:10:24.811224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.316 [2024-12-07 10:10:24.811238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.316 qpair failed and we were unable to recover it.
00:35:56.316 [2024-12-07 10:10:24.821226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.316 [2024-12-07 10:10:24.821296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.316 [2024-12-07 10:10:24.821316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.316 [2024-12-07 10:10:24.821323] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.316 [2024-12-07 10:10:24.821329] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.316 [2024-12-07 10:10:24.821344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.316 qpair failed and we were unable to recover it.
00:35:56.316 [2024-12-07 10:10:24.831270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.316 [2024-12-07 10:10:24.831372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.316 [2024-12-07 10:10:24.831389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.316 [2024-12-07 10:10:24.831396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.316 [2024-12-07 10:10:24.831401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.316 [2024-12-07 10:10:24.831416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.316 qpair failed and we were unable to recover it.
00:35:56.316 [2024-12-07 10:10:24.841217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.316 [2024-12-07 10:10:24.841273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.316 [2024-12-07 10:10:24.841288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.316 [2024-12-07 10:10:24.841298] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.316 [2024-12-07 10:10:24.841304] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.316 [2024-12-07 10:10:24.841318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.316 qpair failed and we were unable to recover it.
00:35:56.316 [2024-12-07 10:10:24.851274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.316 [2024-12-07 10:10:24.851335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.316 [2024-12-07 10:10:24.851358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.316 [2024-12-07 10:10:24.851365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.316 [2024-12-07 10:10:24.851371] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.316 [2024-12-07 10:10:24.851385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.316 qpair failed and we were unable to recover it.
00:35:56.316 [2024-12-07 10:10:24.861351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.316 [2024-12-07 10:10:24.861416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.316 [2024-12-07 10:10:24.861436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.316 [2024-12-07 10:10:24.861443] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.316 [2024-12-07 10:10:24.861449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.316 [2024-12-07 10:10:24.861464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.316 qpair failed and we were unable to recover it.
00:35:56.316 [2024-12-07 10:10:24.871399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.316 [2024-12-07 10:10:24.871461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.316 [2024-12-07 10:10:24.871483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.316 [2024-12-07 10:10:24.871489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.316 [2024-12-07 10:10:24.871495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.316 [2024-12-07 10:10:24.871510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.316 qpair failed and we were unable to recover it.
00:35:56.316 [2024-12-07 10:10:24.881404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.316 [2024-12-07 10:10:24.881463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.316 [2024-12-07 10:10:24.881484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.316 [2024-12-07 10:10:24.881491] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.316 [2024-12-07 10:10:24.881497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.316 [2024-12-07 10:10:24.881511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.316 qpair failed and we were unable to recover it.
00:35:56.316 [2024-12-07 10:10:24.891425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.316 [2024-12-07 10:10:24.891499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.316 [2024-12-07 10:10:24.891513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.316 [2024-12-07 10:10:24.891520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.316 [2024-12-07 10:10:24.891529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.316 [2024-12-07 10:10:24.891544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.316 qpair failed and we were unable to recover it.
00:35:56.316 [2024-12-07 10:10:24.901464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.316 [2024-12-07 10:10:24.901529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.316 [2024-12-07 10:10:24.901543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.316 [2024-12-07 10:10:24.901554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.316 [2024-12-07 10:10:24.901560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.316 [2024-12-07 10:10:24.901574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.316 qpair failed and we were unable to recover it.
00:35:56.316 [2024-12-07 10:10:24.911504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.316 [2024-12-07 10:10:24.911574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.316 [2024-12-07 10:10:24.911588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.316 [2024-12-07 10:10:24.911595] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.316 [2024-12-07 10:10:24.911601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.316 [2024-12-07 10:10:24.911615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.316 qpair failed and we were unable to recover it. 
00:35:56.317 [2024-12-07 10:10:24.921507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.317 [2024-12-07 10:10:24.921564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.317 [2024-12-07 10:10:24.921578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.317 [2024-12-07 10:10:24.921588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.317 [2024-12-07 10:10:24.921594] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.317 [2024-12-07 10:10:24.921607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.317 qpair failed and we were unable to recover it. 
00:35:56.317 [2024-12-07 10:10:24.931577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.317 [2024-12-07 10:10:24.931638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.317 [2024-12-07 10:10:24.931657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.317 [2024-12-07 10:10:24.931664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.317 [2024-12-07 10:10:24.931670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.317 [2024-12-07 10:10:24.931684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.317 qpair failed and we were unable to recover it. 
00:35:56.317 [2024-12-07 10:10:24.941591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.317 [2024-12-07 10:10:24.941669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.317 [2024-12-07 10:10:24.941684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.317 [2024-12-07 10:10:24.941691] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.317 [2024-12-07 10:10:24.941696] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.317 [2024-12-07 10:10:24.941711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.317 qpair failed and we were unable to recover it. 
00:35:56.317 [2024-12-07 10:10:24.951676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.317 [2024-12-07 10:10:24.951733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.317 [2024-12-07 10:10:24.951747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.317 [2024-12-07 10:10:24.951760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.317 [2024-12-07 10:10:24.951766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.317 [2024-12-07 10:10:24.951780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.317 qpair failed and we were unable to recover it. 
00:35:56.317 [2024-12-07 10:10:24.961653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.317 [2024-12-07 10:10:24.961711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.317 [2024-12-07 10:10:24.961725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.317 [2024-12-07 10:10:24.961735] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.317 [2024-12-07 10:10:24.961741] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.317 [2024-12-07 10:10:24.961754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.317 qpair failed and we were unable to recover it. 
00:35:56.317 [2024-12-07 10:10:24.971675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.317 [2024-12-07 10:10:24.971763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.317 [2024-12-07 10:10:24.971778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.317 [2024-12-07 10:10:24.971784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.317 [2024-12-07 10:10:24.971790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.317 [2024-12-07 10:10:24.971805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.317 qpair failed and we were unable to recover it. 
00:35:56.317 [2024-12-07 10:10:24.981704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.317 [2024-12-07 10:10:24.981766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.317 [2024-12-07 10:10:24.981789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.317 [2024-12-07 10:10:24.981795] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.317 [2024-12-07 10:10:24.981804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.317 [2024-12-07 10:10:24.981820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.317 qpair failed and we were unable to recover it. 
00:35:56.317 [2024-12-07 10:10:24.991711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.317 [2024-12-07 10:10:24.991770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.317 [2024-12-07 10:10:24.991785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.317 [2024-12-07 10:10:24.991798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.317 [2024-12-07 10:10:24.991804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.317 [2024-12-07 10:10:24.991818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.317 qpair failed and we were unable to recover it. 
00:35:56.317 [2024-12-07 10:10:25.001739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.317 [2024-12-07 10:10:25.001798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.317 [2024-12-07 10:10:25.001820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.317 [2024-12-07 10:10:25.001827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.317 [2024-12-07 10:10:25.001833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.317 [2024-12-07 10:10:25.001847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.317 qpair failed and we were unable to recover it. 
00:35:56.317 [2024-12-07 10:10:25.011764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.317 [2024-12-07 10:10:25.011822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.317 [2024-12-07 10:10:25.011844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.317 [2024-12-07 10:10:25.011851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.317 [2024-12-07 10:10:25.011856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.317 [2024-12-07 10:10:25.011870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.317 qpair failed and we were unable to recover it. 
00:35:56.317 [2024-12-07 10:10:25.021805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.317 [2024-12-07 10:10:25.021866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.317 [2024-12-07 10:10:25.021888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.317 [2024-12-07 10:10:25.021894] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.317 [2024-12-07 10:10:25.021900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.317 [2024-12-07 10:10:25.021915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.317 qpair failed and we were unable to recover it. 
00:35:56.317 [2024-12-07 10:10:25.031827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.317 [2024-12-07 10:10:25.031893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.317 [2024-12-07 10:10:25.031912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.317 [2024-12-07 10:10:25.031919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.317 [2024-12-07 10:10:25.031925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.317 [2024-12-07 10:10:25.031939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.317 qpair failed and we were unable to recover it. 
00:35:56.577 [2024-12-07 10:10:25.041845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.577 [2024-12-07 10:10:25.041909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.577 [2024-12-07 10:10:25.041926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.577 [2024-12-07 10:10:25.041933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.577 [2024-12-07 10:10:25.041939] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.577 [2024-12-07 10:10:25.041961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.577 qpair failed and we were unable to recover it. 
00:35:56.577 [2024-12-07 10:10:25.051897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.577 [2024-12-07 10:10:25.051959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.577 [2024-12-07 10:10:25.051979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.577 [2024-12-07 10:10:25.051986] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.577 [2024-12-07 10:10:25.051992] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.577 [2024-12-07 10:10:25.052008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.577 qpair failed and we were unable to recover it. 
00:35:56.577 [2024-12-07 10:10:25.061913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.577 [2024-12-07 10:10:25.061973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.577 [2024-12-07 10:10:25.061996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.577 [2024-12-07 10:10:25.062003] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.577 [2024-12-07 10:10:25.062009] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.577 [2024-12-07 10:10:25.062024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.577 qpair failed and we were unable to recover it. 
00:35:56.577 [2024-12-07 10:10:25.071930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.577 [2024-12-07 10:10:25.072029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.577 [2024-12-07 10:10:25.072044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.577 [2024-12-07 10:10:25.072055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.577 [2024-12-07 10:10:25.072061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.577 [2024-12-07 10:10:25.072076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.577 qpair failed and we were unable to recover it. 
00:35:56.577 [2024-12-07 10:10:25.081898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.577 [2024-12-07 10:10:25.081993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.577 [2024-12-07 10:10:25.082011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.577 [2024-12-07 10:10:25.082019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.577 [2024-12-07 10:10:25.082026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.577 [2024-12-07 10:10:25.082041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.577 qpair failed and we were unable to recover it. 
00:35:56.577 [2024-12-07 10:10:25.091975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.577 [2024-12-07 10:10:25.092031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.577 [2024-12-07 10:10:25.092046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.577 [2024-12-07 10:10:25.092057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.577 [2024-12-07 10:10:25.092062] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.577 [2024-12-07 10:10:25.092077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.577 qpair failed and we were unable to recover it. 
00:35:56.577 [2024-12-07 10:10:25.102032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.577 [2024-12-07 10:10:25.102095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.577 [2024-12-07 10:10:25.102109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.577 [2024-12-07 10:10:25.102119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.577 [2024-12-07 10:10:25.102125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.577 [2024-12-07 10:10:25.102139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.577 qpair failed and we were unable to recover it. 
00:35:56.577 [2024-12-07 10:10:25.112061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.577 [2024-12-07 10:10:25.112137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.577 [2024-12-07 10:10:25.112151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.577 [2024-12-07 10:10:25.112157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.577 [2024-12-07 10:10:25.112163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.577 [2024-12-07 10:10:25.112177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.577 qpair failed and we were unable to recover it. 
00:35:56.577 [2024-12-07 10:10:25.122148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.577 [2024-12-07 10:10:25.122204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.577 [2024-12-07 10:10:25.122219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.577 [2024-12-07 10:10:25.122229] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.577 [2024-12-07 10:10:25.122235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.577 [2024-12-07 10:10:25.122249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.577 qpair failed and we were unable to recover it. 
00:35:56.577 [2024-12-07 10:10:25.132125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.577 [2024-12-07 10:10:25.132192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.577 [2024-12-07 10:10:25.132206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.577 [2024-12-07 10:10:25.132213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.577 [2024-12-07 10:10:25.132219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.577 [2024-12-07 10:10:25.132233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.577 qpair failed and we were unable to recover it. 
00:35:56.577 [2024-12-07 10:10:25.142156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.577 [2024-12-07 10:10:25.142219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.577 [2024-12-07 10:10:25.142241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.577 [2024-12-07 10:10:25.142248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.577 [2024-12-07 10:10:25.142254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.577 [2024-12-07 10:10:25.142268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.577 qpair failed and we were unable to recover it. 
00:35:56.577 [2024-12-07 10:10:25.152230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.577 [2024-12-07 10:10:25.152345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.577 [2024-12-07 10:10:25.152362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.577 [2024-12-07 10:10:25.152368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.577 [2024-12-07 10:10:25.152374] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.577 [2024-12-07 10:10:25.152389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.577 qpair failed and we were unable to recover it. 
00:35:56.577 [2024-12-07 10:10:25.162193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.577 [2024-12-07 10:10:25.162249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.577 [2024-12-07 10:10:25.162264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.577 [2024-12-07 10:10:25.162274] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.578 [2024-12-07 10:10:25.162280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.578 [2024-12-07 10:10:25.162295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.578 qpair failed and we were unable to recover it. 
00:35:56.578 [2024-12-07 10:10:25.172224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.578 [2024-12-07 10:10:25.172284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.578 [2024-12-07 10:10:25.172306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.578 [2024-12-07 10:10:25.172312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.578 [2024-12-07 10:10:25.172317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.578 [2024-12-07 10:10:25.172332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.578 qpair failed and we were unable to recover it. 
00:35:56.578 [2024-12-07 10:10:25.182306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.578 [2024-12-07 10:10:25.182409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.578 [2024-12-07 10:10:25.182423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.578 [2024-12-07 10:10:25.182430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.578 [2024-12-07 10:10:25.182436] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.578 [2024-12-07 10:10:25.182449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.578 qpair failed and we were unable to recover it. 
00:35:56.578 [2024-12-07 10:10:25.192220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.578 [2024-12-07 10:10:25.192277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.578 [2024-12-07 10:10:25.192291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.578 [2024-12-07 10:10:25.192301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.578 [2024-12-07 10:10:25.192307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.578 [2024-12-07 10:10:25.192321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.578 qpair failed and we were unable to recover it.
00:35:56.578 [2024-12-07 10:10:25.202279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.578 [2024-12-07 10:10:25.202345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.578 [2024-12-07 10:10:25.202359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.578 [2024-12-07 10:10:25.202366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.578 [2024-12-07 10:10:25.202371] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.578 [2024-12-07 10:10:25.202386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.578 qpair failed and we were unable to recover it.
00:35:56.578 [2024-12-07 10:10:25.212349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.578 [2024-12-07 10:10:25.212407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.578 [2024-12-07 10:10:25.212430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.578 [2024-12-07 10:10:25.212437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.578 [2024-12-07 10:10:25.212443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.578 [2024-12-07 10:10:25.212457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.578 qpair failed and we were unable to recover it.
00:35:56.578 [2024-12-07 10:10:25.222404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.578 [2024-12-07 10:10:25.222508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.578 [2024-12-07 10:10:25.222523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.578 [2024-12-07 10:10:25.222530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.578 [2024-12-07 10:10:25.222536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.578 [2024-12-07 10:10:25.222551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.578 qpair failed and we were unable to recover it.
00:35:56.578 [2024-12-07 10:10:25.232403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.578 [2024-12-07 10:10:25.232479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.578 [2024-12-07 10:10:25.232493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.578 [2024-12-07 10:10:25.232500] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.578 [2024-12-07 10:10:25.232506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.578 [2024-12-07 10:10:25.232520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.578 qpair failed and we were unable to recover it.
00:35:56.578 [2024-12-07 10:10:25.242419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.578 [2024-12-07 10:10:25.242520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.578 [2024-12-07 10:10:25.242534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.578 [2024-12-07 10:10:25.242540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.578 [2024-12-07 10:10:25.242546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.578 [2024-12-07 10:10:25.242560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.578 qpair failed and we were unable to recover it.
00:35:56.578 [2024-12-07 10:10:25.252444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.578 [2024-12-07 10:10:25.252504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.578 [2024-12-07 10:10:25.252523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.578 [2024-12-07 10:10:25.252533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.578 [2024-12-07 10:10:25.252539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.578 [2024-12-07 10:10:25.252554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.578 qpair failed and we were unable to recover it.
00:35:56.578 [2024-12-07 10:10:25.262487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.578 [2024-12-07 10:10:25.262545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.578 [2024-12-07 10:10:25.262560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.578 [2024-12-07 10:10:25.262570] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.578 [2024-12-07 10:10:25.262576] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.578 [2024-12-07 10:10:25.262590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.578 qpair failed and we were unable to recover it.
00:35:56.578 [2024-12-07 10:10:25.272510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.578 [2024-12-07 10:10:25.272568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.578 [2024-12-07 10:10:25.272583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.578 [2024-12-07 10:10:25.272593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.578 [2024-12-07 10:10:25.272599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.578 [2024-12-07 10:10:25.272613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.578 qpair failed and we were unable to recover it.
00:35:56.578 [2024-12-07 10:10:25.282534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.578 [2024-12-07 10:10:25.282597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.578 [2024-12-07 10:10:25.282614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.578 [2024-12-07 10:10:25.282621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.578 [2024-12-07 10:10:25.282627] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.578 [2024-12-07 10:10:25.282641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.578 qpair failed and we were unable to recover it.
00:35:56.578 [2024-12-07 10:10:25.292565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.578 [2024-12-07 10:10:25.292625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.578 [2024-12-07 10:10:25.292645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.578 [2024-12-07 10:10:25.292651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.578 [2024-12-07 10:10:25.292657] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.578 [2024-12-07 10:10:25.292672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.578 qpair failed and we were unable to recover it.
00:35:56.836 [2024-12-07 10:10:25.302602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.836 [2024-12-07 10:10:25.302663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.836 [2024-12-07 10:10:25.302685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.836 [2024-12-07 10:10:25.302692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.836 [2024-12-07 10:10:25.302698] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.836 [2024-12-07 10:10:25.302713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.836 qpair failed and we were unable to recover it.
00:35:56.836 [2024-12-07 10:10:25.312625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.836 [2024-12-07 10:10:25.312688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.836 [2024-12-07 10:10:25.312707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.836 [2024-12-07 10:10:25.312714] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.837 [2024-12-07 10:10:25.312720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.837 [2024-12-07 10:10:25.312735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.837 qpair failed and we were unable to recover it.
00:35:56.837 [2024-12-07 10:10:25.322697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.837 [2024-12-07 10:10:25.322763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.837 [2024-12-07 10:10:25.322778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.837 [2024-12-07 10:10:25.322785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.837 [2024-12-07 10:10:25.322791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.837 [2024-12-07 10:10:25.322805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.837 qpair failed and we were unable to recover it.
00:35:56.837 [2024-12-07 10:10:25.332694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.837 [2024-12-07 10:10:25.332773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.837 [2024-12-07 10:10:25.332789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.837 [2024-12-07 10:10:25.332797] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.837 [2024-12-07 10:10:25.332803] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.837 [2024-12-07 10:10:25.332817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.837 qpair failed and we were unable to recover it.
00:35:56.837 [2024-12-07 10:10:25.342752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.837 [2024-12-07 10:10:25.342830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.837 [2024-12-07 10:10:25.342844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.837 [2024-12-07 10:10:25.342854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.837 [2024-12-07 10:10:25.342860] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.837 [2024-12-07 10:10:25.342874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.837 qpair failed and we were unable to recover it.
00:35:56.837 [2024-12-07 10:10:25.352747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.837 [2024-12-07 10:10:25.352807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.837 [2024-12-07 10:10:25.352829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.837 [2024-12-07 10:10:25.352838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.837 [2024-12-07 10:10:25.352845] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.837 [2024-12-07 10:10:25.352860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.837 qpair failed and we were unable to recover it.
00:35:56.837 [2024-12-07 10:10:25.362755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.837 [2024-12-07 10:10:25.362862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.837 [2024-12-07 10:10:25.362881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.837 [2024-12-07 10:10:25.362890] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.837 [2024-12-07 10:10:25.362896] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.837 [2024-12-07 10:10:25.362911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.837 qpair failed and we were unable to recover it.
00:35:56.837 [2024-12-07 10:10:25.372798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.837 [2024-12-07 10:10:25.372858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.837 [2024-12-07 10:10:25.372880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.837 [2024-12-07 10:10:25.372887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.837 [2024-12-07 10:10:25.372893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.837 [2024-12-07 10:10:25.372907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.837 qpair failed and we were unable to recover it.
00:35:56.837 [2024-12-07 10:10:25.382849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.837 [2024-12-07 10:10:25.382960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.837 [2024-12-07 10:10:25.382975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.837 [2024-12-07 10:10:25.382982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.837 [2024-12-07 10:10:25.382988] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.837 [2024-12-07 10:10:25.383003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.837 qpair failed and we were unable to recover it.
00:35:56.837 [2024-12-07 10:10:25.392846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.837 [2024-12-07 10:10:25.392906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.837 [2024-12-07 10:10:25.392930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.837 [2024-12-07 10:10:25.392937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.837 [2024-12-07 10:10:25.392943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.837 [2024-12-07 10:10:25.392963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.837 qpair failed and we were unable to recover it.
00:35:56.837 [2024-12-07 10:10:25.402901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.837 [2024-12-07 10:10:25.402992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.837 [2024-12-07 10:10:25.403007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.837 [2024-12-07 10:10:25.403013] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.837 [2024-12-07 10:10:25.403019] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.837 [2024-12-07 10:10:25.403034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.837 qpair failed and we were unable to recover it.
00:35:56.837 [2024-12-07 10:10:25.412988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.837 [2024-12-07 10:10:25.413053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.837 [2024-12-07 10:10:25.413075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.837 [2024-12-07 10:10:25.413081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.837 [2024-12-07 10:10:25.413087] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.837 [2024-12-07 10:10:25.413101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.837 qpair failed and we were unable to recover it.
00:35:56.837 [2024-12-07 10:10:25.422965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.837 [2024-12-07 10:10:25.423028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.837 [2024-12-07 10:10:25.423049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.837 [2024-12-07 10:10:25.423056] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.837 [2024-12-07 10:10:25.423062] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.837 [2024-12-07 10:10:25.423077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.837 qpair failed and we were unable to recover it.
00:35:56.837 [2024-12-07 10:10:25.433001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.837 [2024-12-07 10:10:25.433063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.837 [2024-12-07 10:10:25.433084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.837 [2024-12-07 10:10:25.433094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.837 [2024-12-07 10:10:25.433100] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.837 [2024-12-07 10:10:25.433114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.837 qpair failed and we were unable to recover it.
00:35:56.837 [2024-12-07 10:10:25.442946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.837 [2024-12-07 10:10:25.443012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.837 [2024-12-07 10:10:25.443032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.837 [2024-12-07 10:10:25.443038] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.837 [2024-12-07 10:10:25.443044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.837 [2024-12-07 10:10:25.443058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.837 qpair failed and we were unable to recover it.
00:35:56.838 [2024-12-07 10:10:25.452955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.838 [2024-12-07 10:10:25.453009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.838 [2024-12-07 10:10:25.453024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.838 [2024-12-07 10:10:25.453030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.838 [2024-12-07 10:10:25.453044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.838 [2024-12-07 10:10:25.453059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.838 qpair failed and we were unable to recover it.
00:35:56.838 [2024-12-07 10:10:25.463039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.838 [2024-12-07 10:10:25.463101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.838 [2024-12-07 10:10:25.463123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.838 [2024-12-07 10:10:25.463130] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.838 [2024-12-07 10:10:25.463135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.838 [2024-12-07 10:10:25.463150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.838 qpair failed and we were unable to recover it.
00:35:56.838 [2024-12-07 10:10:25.473025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.838 [2024-12-07 10:10:25.473085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.838 [2024-12-07 10:10:25.473107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.838 [2024-12-07 10:10:25.473113] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.838 [2024-12-07 10:10:25.473119] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.838 [2024-12-07 10:10:25.473133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.838 qpair failed and we were unable to recover it.
00:35:56.838 [2024-12-07 10:10:25.483152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.838 [2024-12-07 10:10:25.483212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.838 [2024-12-07 10:10:25.483232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.838 [2024-12-07 10:10:25.483239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.838 [2024-12-07 10:10:25.483244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.838 [2024-12-07 10:10:25.483259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.838 qpair failed and we were unable to recover it.
00:35:56.838 [2024-12-07 10:10:25.493137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.838 [2024-12-07 10:10:25.493192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.838 [2024-12-07 10:10:25.493206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.838 [2024-12-07 10:10:25.493216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.838 [2024-12-07 10:10:25.493222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.838 [2024-12-07 10:10:25.493236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.838 qpair failed and we were unable to recover it.
00:35:56.838 [2024-12-07 10:10:25.503241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.838 [2024-12-07 10:10:25.503303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.838 [2024-12-07 10:10:25.503325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.838 [2024-12-07 10:10:25.503332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.838 [2024-12-07 10:10:25.503337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.838 [2024-12-07 10:10:25.503351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.838 qpair failed and we were unable to recover it.
00:35:56.838 [2024-12-07 10:10:25.513199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.838 [2024-12-07 10:10:25.513261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.838 [2024-12-07 10:10:25.513283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.838 [2024-12-07 10:10:25.513289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.838 [2024-12-07 10:10:25.513296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.838 [2024-12-07 10:10:25.513310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.838 qpair failed and we were unable to recover it.
00:35:56.838 [2024-12-07 10:10:25.523223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.838 [2024-12-07 10:10:25.523313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.838 [2024-12-07 10:10:25.523331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.838 [2024-12-07 10:10:25.523338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.838 [2024-12-07 10:10:25.523343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.838 [2024-12-07 10:10:25.523358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.838 qpair failed and we were unable to recover it.
00:35:56.838 [2024-12-07 10:10:25.533177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.838 [2024-12-07 10:10:25.533233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.838 [2024-12-07 10:10:25.533256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.838 [2024-12-07 10:10:25.533262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.838 [2024-12-07 10:10:25.533268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.838 [2024-12-07 10:10:25.533283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.838 qpair failed and we were unable to recover it.
00:35:56.838 [2024-12-07 10:10:25.543356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.838 [2024-12-07 10:10:25.543418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.838 [2024-12-07 10:10:25.543441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.838 [2024-12-07 10:10:25.543447] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.838 [2024-12-07 10:10:25.543453] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:56.838 [2024-12-07 10:10:25.543467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:56.838 qpair failed and we were unable to recover it.
00:35:56.838 [2024-12-07 10:10:25.553323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.838 [2024-12-07 10:10:25.553387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.838 [2024-12-07 10:10:25.553406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.838 [2024-12-07 10:10:25.553413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.838 [2024-12-07 10:10:25.553418] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:56.838 [2024-12-07 10:10:25.553432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.838 qpair failed and we were unable to recover it. 
00:35:57.097 [2024-12-07 10:10:25.563358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.097 [2024-12-07 10:10:25.563431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.097 [2024-12-07 10:10:25.563448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.097 [2024-12-07 10:10:25.563455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.097 [2024-12-07 10:10:25.563461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.097 [2024-12-07 10:10:25.563476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.097 qpair failed and we were unable to recover it. 
00:35:57.097 [2024-12-07 10:10:25.573301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.097 [2024-12-07 10:10:25.573362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.097 [2024-12-07 10:10:25.573381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.097 [2024-12-07 10:10:25.573388] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.097 [2024-12-07 10:10:25.573394] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.097 [2024-12-07 10:10:25.573409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.097 qpair failed and we were unable to recover it. 
00:35:57.097 [2024-12-07 10:10:25.583404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.097 [2024-12-07 10:10:25.583462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.097 [2024-12-07 10:10:25.583477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.097 [2024-12-07 10:10:25.583487] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.097 [2024-12-07 10:10:25.583494] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.097 [2024-12-07 10:10:25.583509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.097 qpair failed and we were unable to recover it. 
00:35:57.097 [2024-12-07 10:10:25.593388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.097 [2024-12-07 10:10:25.593461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.097 [2024-12-07 10:10:25.593476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.097 [2024-12-07 10:10:25.593483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.097 [2024-12-07 10:10:25.593489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.097 [2024-12-07 10:10:25.593503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.097 qpair failed and we were unable to recover it. 
00:35:57.097 [2024-12-07 10:10:25.603497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.097 [2024-12-07 10:10:25.603555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.097 [2024-12-07 10:10:25.603577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.097 [2024-12-07 10:10:25.603585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.097 [2024-12-07 10:10:25.603591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.097 [2024-12-07 10:10:25.603606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.097 qpair failed and we were unable to recover it. 
00:35:57.097 [2024-12-07 10:10:25.613486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.097 [2024-12-07 10:10:25.613552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.097 [2024-12-07 10:10:25.613571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.097 [2024-12-07 10:10:25.613577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.097 [2024-12-07 10:10:25.613583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.097 [2024-12-07 10:10:25.613598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.097 qpair failed and we were unable to recover it. 
00:35:57.097 [2024-12-07 10:10:25.623513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.097 [2024-12-07 10:10:25.623571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.097 [2024-12-07 10:10:25.623586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.097 [2024-12-07 10:10:25.623596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.097 [2024-12-07 10:10:25.623602] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.097 [2024-12-07 10:10:25.623617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.097 qpair failed and we were unable to recover it. 
00:35:57.097 [2024-12-07 10:10:25.633518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.097 [2024-12-07 10:10:25.633578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.097 [2024-12-07 10:10:25.633601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.097 [2024-12-07 10:10:25.633607] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.097 [2024-12-07 10:10:25.633613] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.097 [2024-12-07 10:10:25.633628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.097 qpair failed and we were unable to recover it. 
00:35:57.097 [2024-12-07 10:10:25.643561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.097 [2024-12-07 10:10:25.643620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.097 [2024-12-07 10:10:25.643641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.097 [2024-12-07 10:10:25.643648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.097 [2024-12-07 10:10:25.643654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.097 [2024-12-07 10:10:25.643668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.097 qpair failed and we were unable to recover it. 
00:35:57.097 [2024-12-07 10:10:25.653585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.097 [2024-12-07 10:10:25.653645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.097 [2024-12-07 10:10:25.653664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.097 [2024-12-07 10:10:25.653671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.097 [2024-12-07 10:10:25.653677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.097 [2024-12-07 10:10:25.653695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.097 qpair failed and we were unable to recover it. 
00:35:57.097 [2024-12-07 10:10:25.663666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.097 [2024-12-07 10:10:25.663772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.097 [2024-12-07 10:10:25.663786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.097 [2024-12-07 10:10:25.663793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.097 [2024-12-07 10:10:25.663799] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.097 [2024-12-07 10:10:25.663813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.097 qpair failed and we were unable to recover it. 
00:35:57.097 [2024-12-07 10:10:25.673651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.097 [2024-12-07 10:10:25.673710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.097 [2024-12-07 10:10:25.673724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.097 [2024-12-07 10:10:25.673734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.098 [2024-12-07 10:10:25.673739] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.098 [2024-12-07 10:10:25.673754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.098 qpair failed and we were unable to recover it. 
00:35:57.098 [2024-12-07 10:10:25.683678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.098 [2024-12-07 10:10:25.683750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.098 [2024-12-07 10:10:25.683764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.098 [2024-12-07 10:10:25.683771] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.098 [2024-12-07 10:10:25.683777] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.098 [2024-12-07 10:10:25.683791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.098 qpair failed and we were unable to recover it. 
00:35:57.098 [2024-12-07 10:10:25.693692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.098 [2024-12-07 10:10:25.693755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.098 [2024-12-07 10:10:25.693772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.098 [2024-12-07 10:10:25.693779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.098 [2024-12-07 10:10:25.693785] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.098 [2024-12-07 10:10:25.693799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.098 qpair failed and we were unable to recover it. 
00:35:57.098 [2024-12-07 10:10:25.703737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.098 [2024-12-07 10:10:25.703797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.098 [2024-12-07 10:10:25.703822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.098 [2024-12-07 10:10:25.703828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.098 [2024-12-07 10:10:25.703834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.098 [2024-12-07 10:10:25.703848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.098 qpair failed and we were unable to recover it. 
00:35:57.098 [2024-12-07 10:10:25.713786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.098 [2024-12-07 10:10:25.713850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.098 [2024-12-07 10:10:25.713868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.098 [2024-12-07 10:10:25.713876] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.098 [2024-12-07 10:10:25.713881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.098 [2024-12-07 10:10:25.713896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.098 qpair failed and we were unable to recover it. 
00:35:57.098 [2024-12-07 10:10:25.723782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.098 [2024-12-07 10:10:25.723840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.098 [2024-12-07 10:10:25.723862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.098 [2024-12-07 10:10:25.723869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.098 [2024-12-07 10:10:25.723875] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.098 [2024-12-07 10:10:25.723890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.098 qpair failed and we were unable to recover it. 
00:35:57.098 [2024-12-07 10:10:25.733856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.098 [2024-12-07 10:10:25.733965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.098 [2024-12-07 10:10:25.733980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.098 [2024-12-07 10:10:25.733987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.098 [2024-12-07 10:10:25.733993] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.098 [2024-12-07 10:10:25.734008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.098 qpair failed and we were unable to recover it. 
00:35:57.098 [2024-12-07 10:10:25.743852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.098 [2024-12-07 10:10:25.743912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.098 [2024-12-07 10:10:25.743935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.098 [2024-12-07 10:10:25.743942] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.098 [2024-12-07 10:10:25.743951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.098 [2024-12-07 10:10:25.743970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.098 qpair failed and we were unable to recover it. 
00:35:57.098 [2024-12-07 10:10:25.753937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.098 [2024-12-07 10:10:25.754002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.098 [2024-12-07 10:10:25.754023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.098 [2024-12-07 10:10:25.754030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.098 [2024-12-07 10:10:25.754036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.098 [2024-12-07 10:10:25.754050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.098 qpair failed and we were unable to recover it. 
00:35:57.098 [2024-12-07 10:10:25.763955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.098 [2024-12-07 10:10:25.764057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.098 [2024-12-07 10:10:25.764071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.098 [2024-12-07 10:10:25.764078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.098 [2024-12-07 10:10:25.764084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.098 [2024-12-07 10:10:25.764099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.098 qpair failed and we were unable to recover it. 
00:35:57.098 [2024-12-07 10:10:25.773991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.098 [2024-12-07 10:10:25.774094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.098 [2024-12-07 10:10:25.774107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.098 [2024-12-07 10:10:25.774114] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.098 [2024-12-07 10:10:25.774121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.098 [2024-12-07 10:10:25.774135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.098 qpair failed and we were unable to recover it. 
00:35:57.098 [2024-12-07 10:10:25.783972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.098 [2024-12-07 10:10:25.784083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.098 [2024-12-07 10:10:25.784101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.098 [2024-12-07 10:10:25.784107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.098 [2024-12-07 10:10:25.784113] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.098 [2024-12-07 10:10:25.784127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.098 qpair failed and we were unable to recover it. 
00:35:57.098 [2024-12-07 10:10:25.793927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.098 [2024-12-07 10:10:25.793989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.098 [2024-12-07 10:10:25.794008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.098 [2024-12-07 10:10:25.794015] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.098 [2024-12-07 10:10:25.794021] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.098 [2024-12-07 10:10:25.794036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.098 qpair failed and we were unable to recover it. 
00:35:57.098 [2024-12-07 10:10:25.803951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.098 [2024-12-07 10:10:25.804012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.098 [2024-12-07 10:10:25.804031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.098 [2024-12-07 10:10:25.804037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.098 [2024-12-07 10:10:25.804043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.098 [2024-12-07 10:10:25.804058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.098 qpair failed and we were unable to recover it. 
00:35:57.098 [2024-12-07 10:10:25.814064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.098 [2024-12-07 10:10:25.814128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.098 [2024-12-07 10:10:25.814145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.098 [2024-12-07 10:10:25.814152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.098 [2024-12-07 10:10:25.814158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.098 [2024-12-07 10:10:25.814172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.098 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:25.824113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:25.824177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:25.824198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:25.824209] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:25.824216] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:25.824232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:25.834161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:25.834274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:25.834290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:25.834298] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:25.834305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:25.834324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:25.844117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:25.844181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:25.844199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:25.844205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:25.844211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:25.844226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:25.854160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:25.854214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:25.854229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:25.854240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:25.854246] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:25.854260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:25.864199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:25.864262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:25.864283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:25.864290] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:25.864296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:25.864310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:25.874217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:25.874275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:25.874290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:25.874301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:25.874307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:25.874322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:25.884263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:25.884373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:25.884394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:25.884401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:25.884407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:25.884421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:25.894272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:25.894329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:25.894343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:25.894352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:25.894358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:25.894373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:25.904301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:25.904364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:25.904385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:25.904392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:25.904398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:25.904412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:25.914256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:25.914314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:25.914338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:25.914345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:25.914351] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:25.914365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:25.924385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:25.924466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:25.924480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:25.924487] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:25.924496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:25.924511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:25.934376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:25.934441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:25.934459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:25.934466] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:25.934472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:25.934485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:25.944354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:25.944415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:25.944439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:25.944446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:25.944452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:25.944467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:25.954447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:25.954542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:25.954556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:25.954563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:25.954569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:25.954584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:25.964418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:25.964478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:25.964498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:25.964505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:25.964511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:25.964526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:25.974497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:25.974554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:25.974579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:25.974585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:25.974591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:25.974606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:25.984534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:25.984592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:25.984606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:25.984618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:25.984623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:25.984638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:25.994575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:25.994638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:25.994661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:25.994668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:25.994674] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:25.994688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:26.004619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:26.004728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:26.004745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:26.004752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:26.004758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:26.004772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:26.014567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:26.014629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:26.014649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:26.014655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:26.014664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:26.014679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:26.024587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:26.024651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:26.024672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:26.024679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.357 [2024-12-07 10:10:26.024684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.357 [2024-12-07 10:10:26.024699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.357 qpair failed and we were unable to recover it. 
00:35:57.357 [2024-12-07 10:10:26.034740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.357 [2024-12-07 10:10:26.034849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.357 [2024-12-07 10:10:26.034867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.357 [2024-12-07 10:10:26.034874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.358 [2024-12-07 10:10:26.034880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.358 [2024-12-07 10:10:26.034895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.358 qpair failed and we were unable to recover it. 
00:35:57.358 [2024-12-07 10:10:26.044648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.358 [2024-12-07 10:10:26.044714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.358 [2024-12-07 10:10:26.044732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.358 [2024-12-07 10:10:26.044739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.358 [2024-12-07 10:10:26.044745] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.358 [2024-12-07 10:10:26.044759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.358 qpair failed and we were unable to recover it. 
00:35:57.358 [2024-12-07 10:10:26.054700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.358 [2024-12-07 10:10:26.054756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.358 [2024-12-07 10:10:26.054770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.358 [2024-12-07 10:10:26.054781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.358 [2024-12-07 10:10:26.054786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.358 [2024-12-07 10:10:26.054801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.358 qpair failed and we were unable to recover it. 
00:35:57.358 [2024-12-07 10:10:26.064705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.358 [2024-12-07 10:10:26.064767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.358 [2024-12-07 10:10:26.064791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.358 [2024-12-07 10:10:26.064798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.358 [2024-12-07 10:10:26.064804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.358 [2024-12-07 10:10:26.064818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.358 qpair failed and we were unable to recover it. 
00:35:57.358 [2024-12-07 10:10:26.074725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.358 [2024-12-07 10:10:26.074785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.358 [2024-12-07 10:10:26.074807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.358 [2024-12-07 10:10:26.074814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.358 [2024-12-07 10:10:26.074820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.358 [2024-12-07 10:10:26.074834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.358 qpair failed and we were unable to recover it. 
00:35:57.616 [2024-12-07 10:10:26.084874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.616 [2024-12-07 10:10:26.084936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.616 [2024-12-07 10:10:26.084967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.616 [2024-12-07 10:10:26.084975] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.616 [2024-12-07 10:10:26.084981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.616 [2024-12-07 10:10:26.084998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.616 qpair failed and we were unable to recover it. 
00:35:57.616 [2024-12-07 10:10:26.094777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.616 [2024-12-07 10:10:26.094844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.616 [2024-12-07 10:10:26.094860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.616 [2024-12-07 10:10:26.094866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.616 [2024-12-07 10:10:26.094872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.617 [2024-12-07 10:10:26.094887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.617 qpair failed and we were unable to recover it. 
00:35:57.617 [2024-12-07 10:10:26.104916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.617 [2024-12-07 10:10:26.104989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.617 [2024-12-07 10:10:26.105007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.617 [2024-12-07 10:10:26.105014] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.617 [2024-12-07 10:10:26.105023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.617 [2024-12-07 10:10:26.105039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.617 qpair failed and we were unable to recover it. 
00:35:57.617 [2024-12-07 10:10:26.114964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.617 [2024-12-07 10:10:26.115067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.617 [2024-12-07 10:10:26.115082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.617 [2024-12-07 10:10:26.115088] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.617 [2024-12-07 10:10:26.115094] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.617 [2024-12-07 10:10:26.115109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.617 qpair failed and we were unable to recover it. 
00:35:57.617 [2024-12-07 10:10:26.124889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.617 [2024-12-07 10:10:26.124987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.617 [2024-12-07 10:10:26.125002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.617 [2024-12-07 10:10:26.125009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.617 [2024-12-07 10:10:26.125015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.617 [2024-12-07 10:10:26.125030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.617 qpair failed and we were unable to recover it. 
00:35:57.617 [2024-12-07 10:10:26.134987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.617 [2024-12-07 10:10:26.135050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.617 [2024-12-07 10:10:26.135072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.617 [2024-12-07 10:10:26.135079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.617 [2024-12-07 10:10:26.135085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.617 [2024-12-07 10:10:26.135099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.617 qpair failed and we were unable to recover it. 
00:35:57.617 [2024-12-07 10:10:26.145042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.617 [2024-12-07 10:10:26.145133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.617 [2024-12-07 10:10:26.145148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.617 [2024-12-07 10:10:26.145155] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.617 [2024-12-07 10:10:26.145161] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.617 [2024-12-07 10:10:26.145175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.617 qpair failed and we were unable to recover it. 
00:35:57.617 [2024-12-07 10:10:26.154967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.155054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.155070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.155077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.617 [2024-12-07 10:10:26.155083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.617 [2024-12-07 10:10:26.155098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.617 qpair failed and we were unable to recover it.
00:35:57.617 [2024-12-07 10:10:26.164991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.165049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.165065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.165074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.617 [2024-12-07 10:10:26.165080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.617 [2024-12-07 10:10:26.165095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.617 qpair failed and we were unable to recover it.
00:35:57.617 [2024-12-07 10:10:26.175082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.175143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.175157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.175164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.617 [2024-12-07 10:10:26.175170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.617 [2024-12-07 10:10:26.175184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.617 qpair failed and we were unable to recover it.
00:35:57.617 [2024-12-07 10:10:26.185121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.185186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.185201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.185212] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.617 [2024-12-07 10:10:26.185218] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.617 [2024-12-07 10:10:26.185232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.617 qpair failed and we were unable to recover it.
00:35:57.617 [2024-12-07 10:10:26.195152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.195213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.195227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.195239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.617 [2024-12-07 10:10:26.195248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.617 [2024-12-07 10:10:26.195262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.617 qpair failed and we were unable to recover it.
00:35:57.617 [2024-12-07 10:10:26.205171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.205232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.205255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.205262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.617 [2024-12-07 10:10:26.205268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.617 [2024-12-07 10:10:26.205282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.617 qpair failed and we were unable to recover it.
00:35:57.617 [2024-12-07 10:10:26.215127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.215218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.215233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.215240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.617 [2024-12-07 10:10:26.215246] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.617 [2024-12-07 10:10:26.215260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.617 qpair failed and we were unable to recover it.
00:35:57.617 [2024-12-07 10:10:26.225263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.225374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.225390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.225397] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.617 [2024-12-07 10:10:26.225403] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.617 [2024-12-07 10:10:26.225420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.617 qpair failed and we were unable to recover it.
00:35:57.617 [2024-12-07 10:10:26.235258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.235321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.235336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.235346] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.617 [2024-12-07 10:10:26.235352] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.617 [2024-12-07 10:10:26.235366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.617 qpair failed and we were unable to recover it.
00:35:57.617 [2024-12-07 10:10:26.245212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.245277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.245297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.245303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.617 [2024-12-07 10:10:26.245309] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.617 [2024-12-07 10:10:26.245323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.617 qpair failed and we were unable to recover it.
00:35:57.617 [2024-12-07 10:10:26.255303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.255362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.255385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.255391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.617 [2024-12-07 10:10:26.255397] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.617 [2024-12-07 10:10:26.255411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.617 qpair failed and we were unable to recover it.
00:35:57.617 [2024-12-07 10:10:26.265343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.265405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.265419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.265430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.617 [2024-12-07 10:10:26.265436] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.617 [2024-12-07 10:10:26.265451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.617 qpair failed and we were unable to recover it.
00:35:57.617 [2024-12-07 10:10:26.275367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.275432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.275454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.275460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.617 [2024-12-07 10:10:26.275466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.617 [2024-12-07 10:10:26.275480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.617 qpair failed and we were unable to recover it.
00:35:57.617 [2024-12-07 10:10:26.285443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.285501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.285516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.285528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.617 [2024-12-07 10:10:26.285534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.617 [2024-12-07 10:10:26.285549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.617 qpair failed and we were unable to recover it.
00:35:57.617 [2024-12-07 10:10:26.295431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.295491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.295506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.295513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.617 [2024-12-07 10:10:26.295520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.617 [2024-12-07 10:10:26.295535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.617 qpair failed and we were unable to recover it.
00:35:57.617 [2024-12-07 10:10:26.305469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.305530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.305545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.305559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.617 [2024-12-07 10:10:26.305565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.617 [2024-12-07 10:10:26.305579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.617 qpair failed and we were unable to recover it.
00:35:57.617 [2024-12-07 10:10:26.315445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.315509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.315530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.315537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.617 [2024-12-07 10:10:26.315543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.617 [2024-12-07 10:10:26.315557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.617 qpair failed and we were unable to recover it.
00:35:57.617 [2024-12-07 10:10:26.325493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.325554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.325569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.325575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.617 [2024-12-07 10:10:26.325582] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.617 [2024-12-07 10:10:26.325597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.617 qpair failed and we were unable to recover it.
00:35:57.617 [2024-12-07 10:10:26.335470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.617 [2024-12-07 10:10:26.335528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.617 [2024-12-07 10:10:26.335553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.617 [2024-12-07 10:10:26.335560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.618 [2024-12-07 10:10:26.335566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.618 [2024-12-07 10:10:26.335581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.618 qpair failed and we were unable to recover it.
00:35:57.876 [2024-12-07 10:10:26.345495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.876 [2024-12-07 10:10:26.345558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.876 [2024-12-07 10:10:26.345580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.876 [2024-12-07 10:10:26.345588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.876 [2024-12-07 10:10:26.345594] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.876 [2024-12-07 10:10:26.345610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.876 qpair failed and we were unable to recover it.
00:35:57.876 [2024-12-07 10:10:26.355516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.876 [2024-12-07 10:10:26.355574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.876 [2024-12-07 10:10:26.355589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.876 [2024-12-07 10:10:26.355603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.876 [2024-12-07 10:10:26.355609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.876 [2024-12-07 10:10:26.355624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.876 qpair failed and we were unable to recover it.
00:35:57.876 [2024-12-07 10:10:26.365546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.876 [2024-12-07 10:10:26.365611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.876 [2024-12-07 10:10:26.365632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.876 [2024-12-07 10:10:26.365638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.876 [2024-12-07 10:10:26.365645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.876 [2024-12-07 10:10:26.365659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.876 qpair failed and we were unable to recover it.
00:35:57.876 [2024-12-07 10:10:26.375645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.877 [2024-12-07 10:10:26.375702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.877 [2024-12-07 10:10:26.375717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.877 [2024-12-07 10:10:26.375734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.877 [2024-12-07 10:10:26.375740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.877 [2024-12-07 10:10:26.375755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.877 qpair failed and we were unable to recover it.
00:35:57.877 [2024-12-07 10:10:26.385616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.877 [2024-12-07 10:10:26.385674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.877 [2024-12-07 10:10:26.385689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.877 [2024-12-07 10:10:26.385702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.877 [2024-12-07 10:10:26.385709] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.877 [2024-12-07 10:10:26.385724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.877 qpair failed and we were unable to recover it.
00:35:57.877 [2024-12-07 10:10:26.395679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.877 [2024-12-07 10:10:26.395740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.877 [2024-12-07 10:10:26.395754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.877 [2024-12-07 10:10:26.395765] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.877 [2024-12-07 10:10:26.395770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.877 [2024-12-07 10:10:26.395785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.877 qpair failed and we were unable to recover it.
00:35:57.877 [2024-12-07 10:10:26.405656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.877 [2024-12-07 10:10:26.405717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.877 [2024-12-07 10:10:26.405740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.877 [2024-12-07 10:10:26.405746] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.877 [2024-12-07 10:10:26.405752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.877 [2024-12-07 10:10:26.405767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.877 qpair failed and we were unable to recover it.
00:35:57.877 [2024-12-07 10:10:26.415742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.877 [2024-12-07 10:10:26.415832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.877 [2024-12-07 10:10:26.415848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.877 [2024-12-07 10:10:26.415854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.877 [2024-12-07 10:10:26.415860] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.877 [2024-12-07 10:10:26.415875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.877 qpair failed and we were unable to recover it.
00:35:57.877 [2024-12-07 10:10:26.425802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.877 [2024-12-07 10:10:26.425863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.877 [2024-12-07 10:10:26.425877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.877 [2024-12-07 10:10:26.425889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.877 [2024-12-07 10:10:26.425895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.877 [2024-12-07 10:10:26.425909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.877 qpair failed and we were unable to recover it.
00:35:57.877 [2024-12-07 10:10:26.435811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.877 [2024-12-07 10:10:26.435871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.877 [2024-12-07 10:10:26.435884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.877 [2024-12-07 10:10:26.435898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.877 [2024-12-07 10:10:26.435904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.877 [2024-12-07 10:10:26.435918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.877 qpair failed and we were unable to recover it.
00:35:57.877 [2024-12-07 10:10:26.445869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.877 [2024-12-07 10:10:26.445968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.877 [2024-12-07 10:10:26.445983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.877 [2024-12-07 10:10:26.445990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.877 [2024-12-07 10:10:26.445996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.877 [2024-12-07 10:10:26.446011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.877 qpair failed and we were unable to recover it.
00:35:57.877 [2024-12-07 10:10:26.455874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.877 [2024-12-07 10:10:26.455937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.877 [2024-12-07 10:10:26.455965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.877 [2024-12-07 10:10:26.455971] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.877 [2024-12-07 10:10:26.455978] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.877 [2024-12-07 10:10:26.455992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.877 qpair failed and we were unable to recover it.
00:35:57.877 [2024-12-07 10:10:26.465916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.877 [2024-12-07 10:10:26.466013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.877 [2024-12-07 10:10:26.466029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.877 [2024-12-07 10:10:26.466040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.877 [2024-12-07 10:10:26.466047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.877 [2024-12-07 10:10:26.466062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.877 qpair failed and we were unable to recover it.
00:35:57.877 [2024-12-07 10:10:26.475956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.877 [2024-12-07 10:10:26.476061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.877 [2024-12-07 10:10:26.476077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.877 [2024-12-07 10:10:26.476084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.877 [2024-12-07 10:10:26.476090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.877 [2024-12-07 10:10:26.476106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.877 qpair failed and we were unable to recover it.
00:35:57.877 [2024-12-07 10:10:26.486009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.877 [2024-12-07 10:10:26.486111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.877 [2024-12-07 10:10:26.486126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.877 [2024-12-07 10:10:26.486133] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.877 [2024-12-07 10:10:26.486140] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.877 [2024-12-07 10:10:26.486155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.877 qpair failed and we were unable to recover it.
00:35:57.877 [2024-12-07 10:10:26.495955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.877 [2024-12-07 10:10:26.496053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.877 [2024-12-07 10:10:26.496068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.877 [2024-12-07 10:10:26.496075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.877 [2024-12-07 10:10:26.496081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.877 [2024-12-07 10:10:26.496096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.877 qpair failed and we were unable to recover it.
00:35:57.877 [2024-12-07 10:10:26.506014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.877 [2024-12-07 10:10:26.506077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.877 [2024-12-07 10:10:26.506092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.877 [2024-12-07 10:10:26.506102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.878 [2024-12-07 10:10:26.506108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:57.878 [2024-12-07 10:10:26.506123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:57.878 qpair failed and we were unable to recover it.
00:35:57.878 [2024-12-07 10:10:26.516009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.878 [2024-12-07 10:10:26.516110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.878 [2024-12-07 10:10:26.516125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.878 [2024-12-07 10:10:26.516132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.878 [2024-12-07 10:10:26.516138] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.878 [2024-12-07 10:10:26.516152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.878 qpair failed and we were unable to recover it. 
00:35:57.878 [2024-12-07 10:10:26.526108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.878 [2024-12-07 10:10:26.526167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.878 [2024-12-07 10:10:26.526181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.878 [2024-12-07 10:10:26.526194] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.878 [2024-12-07 10:10:26.526200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.878 [2024-12-07 10:10:26.526214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.878 qpair failed and we were unable to recover it. 
00:35:57.878 [2024-12-07 10:10:26.536116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.878 [2024-12-07 10:10:26.536209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.878 [2024-12-07 10:10:26.536224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.878 [2024-12-07 10:10:26.536231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.878 [2024-12-07 10:10:26.536237] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.878 [2024-12-07 10:10:26.536251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.878 qpair failed and we were unable to recover it. 
00:35:57.878 [2024-12-07 10:10:26.546155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.878 [2024-12-07 10:10:26.546214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.878 [2024-12-07 10:10:26.546229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.878 [2024-12-07 10:10:26.546236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.878 [2024-12-07 10:10:26.546241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.878 [2024-12-07 10:10:26.546256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.878 qpair failed and we were unable to recover it. 
00:35:57.878 [2024-12-07 10:10:26.556180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.878 [2024-12-07 10:10:26.556240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.878 [2024-12-07 10:10:26.556255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.878 [2024-12-07 10:10:26.556268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.878 [2024-12-07 10:10:26.556274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.878 [2024-12-07 10:10:26.556289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.878 qpair failed and we were unable to recover it. 
00:35:57.878 [2024-12-07 10:10:26.566208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.878 [2024-12-07 10:10:26.566266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.878 [2024-12-07 10:10:26.566280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.878 [2024-12-07 10:10:26.566290] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.878 [2024-12-07 10:10:26.566296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.878 [2024-12-07 10:10:26.566310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.878 qpair failed and we were unable to recover it. 
00:35:57.878 [2024-12-07 10:10:26.576231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.878 [2024-12-07 10:10:26.576291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.878 [2024-12-07 10:10:26.576312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.878 [2024-12-07 10:10:26.576319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.878 [2024-12-07 10:10:26.576324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.878 [2024-12-07 10:10:26.576339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.878 qpair failed and we were unable to recover it. 
00:35:57.878 [2024-12-07 10:10:26.586292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.878 [2024-12-07 10:10:26.586355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.878 [2024-12-07 10:10:26.586369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.878 [2024-12-07 10:10:26.586379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.878 [2024-12-07 10:10:26.586385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.878 [2024-12-07 10:10:26.586399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.878 qpair failed and we were unable to recover it. 
00:35:57.878 [2024-12-07 10:10:26.596343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.878 [2024-12-07 10:10:26.596404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.878 [2024-12-07 10:10:26.596421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.878 [2024-12-07 10:10:26.596427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.878 [2024-12-07 10:10:26.596434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:57.878 [2024-12-07 10:10:26.596456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.878 qpair failed and we were unable to recover it. 
00:35:58.136 [2024-12-07 10:10:26.606356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.136 [2024-12-07 10:10:26.606455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.136 [2024-12-07 10:10:26.606474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.136 [2024-12-07 10:10:26.606482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.136 [2024-12-07 10:10:26.606488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.136 [2024-12-07 10:10:26.606505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.136 qpair failed and we were unable to recover it. 
00:35:58.136 [2024-12-07 10:10:26.616317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.136 [2024-12-07 10:10:26.616410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.136 [2024-12-07 10:10:26.616426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.136 [2024-12-07 10:10:26.616433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.136 [2024-12-07 10:10:26.616439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.136 [2024-12-07 10:10:26.616455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.136 qpair failed and we were unable to recover it. 
00:35:58.136 [2024-12-07 10:10:26.626441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.136 [2024-12-07 10:10:26.626539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.136 [2024-12-07 10:10:26.626555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.137 [2024-12-07 10:10:26.626561] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.137 [2024-12-07 10:10:26.626568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.137 [2024-12-07 10:10:26.626583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.137 qpair failed and we were unable to recover it. 
00:35:58.137 [2024-12-07 10:10:26.636397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.137 [2024-12-07 10:10:26.636458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.137 [2024-12-07 10:10:26.636479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.137 [2024-12-07 10:10:26.636486] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.137 [2024-12-07 10:10:26.636492] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.137 [2024-12-07 10:10:26.636508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.137 qpair failed and we were unable to recover it. 
00:35:58.137 [2024-12-07 10:10:26.646446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.137 [2024-12-07 10:10:26.646503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.137 [2024-12-07 10:10:26.646530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.137 [2024-12-07 10:10:26.646537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.137 [2024-12-07 10:10:26.646543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.137 [2024-12-07 10:10:26.646558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.137 qpair failed and we were unable to recover it. 
00:35:58.137 [2024-12-07 10:10:26.656456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.137 [2024-12-07 10:10:26.656514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.137 [2024-12-07 10:10:26.656528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.137 [2024-12-07 10:10:26.656539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.137 [2024-12-07 10:10:26.656545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.137 [2024-12-07 10:10:26.656560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.137 qpair failed and we were unable to recover it. 
00:35:58.137 [2024-12-07 10:10:26.666506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.137 [2024-12-07 10:10:26.666588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.137 [2024-12-07 10:10:26.666603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.137 [2024-12-07 10:10:26.666609] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.137 [2024-12-07 10:10:26.666618] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.137 [2024-12-07 10:10:26.666633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.137 qpair failed and we were unable to recover it. 
00:35:58.137 [2024-12-07 10:10:26.676567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.137 [2024-12-07 10:10:26.676633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.137 [2024-12-07 10:10:26.676650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.137 [2024-12-07 10:10:26.676657] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.137 [2024-12-07 10:10:26.676663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.137 [2024-12-07 10:10:26.676677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.137 qpair failed and we were unable to recover it. 
00:35:58.137 [2024-12-07 10:10:26.686609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.137 [2024-12-07 10:10:26.686668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.137 [2024-12-07 10:10:26.686690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.137 [2024-12-07 10:10:26.686697] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.137 [2024-12-07 10:10:26.686703] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.137 [2024-12-07 10:10:26.686717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.137 qpair failed and we were unable to recover it. 
00:35:58.137 [2024-12-07 10:10:26.696577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.137 [2024-12-07 10:10:26.696673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.137 [2024-12-07 10:10:26.696688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.137 [2024-12-07 10:10:26.696695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.137 [2024-12-07 10:10:26.696702] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.137 [2024-12-07 10:10:26.696716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.137 qpair failed and we were unable to recover it. 
00:35:58.137 [2024-12-07 10:10:26.706627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.137 [2024-12-07 10:10:26.706696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.137 [2024-12-07 10:10:26.706712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.137 [2024-12-07 10:10:26.706718] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.137 [2024-12-07 10:10:26.706724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.137 [2024-12-07 10:10:26.706738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.137 qpair failed and we were unable to recover it. 
00:35:58.137 [2024-12-07 10:10:26.716633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.137 [2024-12-07 10:10:26.716696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.137 [2024-12-07 10:10:26.716715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.137 [2024-12-07 10:10:26.716722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.137 [2024-12-07 10:10:26.716728] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.137 [2024-12-07 10:10:26.716742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.137 qpair failed and we were unable to recover it. 
00:35:58.137 [2024-12-07 10:10:26.726637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.137 [2024-12-07 10:10:26.726703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.137 [2024-12-07 10:10:26.726720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.137 [2024-12-07 10:10:26.726727] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.137 [2024-12-07 10:10:26.726733] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.137 [2024-12-07 10:10:26.726748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.137 qpair failed and we were unable to recover it. 
00:35:58.137 [2024-12-07 10:10:26.736712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.137 [2024-12-07 10:10:26.736777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.137 [2024-12-07 10:10:26.736797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.137 [2024-12-07 10:10:26.736804] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.137 [2024-12-07 10:10:26.736810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.137 [2024-12-07 10:10:26.736826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.137 qpair failed and we were unable to recover it. 
00:35:58.137 [2024-12-07 10:10:26.746751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.137 [2024-12-07 10:10:26.746820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.137 [2024-12-07 10:10:26.746835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.137 [2024-12-07 10:10:26.746842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.137 [2024-12-07 10:10:26.746848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.137 [2024-12-07 10:10:26.746863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.137 qpair failed and we were unable to recover it. 
00:35:58.137 [2024-12-07 10:10:26.756760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.137 [2024-12-07 10:10:26.756822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.137 [2024-12-07 10:10:26.756846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.137 [2024-12-07 10:10:26.756852] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.137 [2024-12-07 10:10:26.756859] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.137 [2024-12-07 10:10:26.756874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.137 qpair failed and we were unable to recover it. 
00:35:58.137 [2024-12-07 10:10:26.766791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.137 [2024-12-07 10:10:26.766850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.137 [2024-12-07 10:10:26.766872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.137 [2024-12-07 10:10:26.766879] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.137 [2024-12-07 10:10:26.766885] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.137 [2024-12-07 10:10:26.766900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.137 qpair failed and we were unable to recover it. 
00:35:58.137 [2024-12-07 10:10:26.776840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.137 [2024-12-07 10:10:26.776902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.137 [2024-12-07 10:10:26.776925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.137 [2024-12-07 10:10:26.776932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.137 [2024-12-07 10:10:26.776938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.137 [2024-12-07 10:10:26.776957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.137 qpair failed and we were unable to recover it. 
00:35:58.137 [2024-12-07 10:10:26.786867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.137 [2024-12-07 10:10:26.786941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.137 [2024-12-07 10:10:26.786958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.137 [2024-12-07 10:10:26.786965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.137 [2024-12-07 10:10:26.786971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.137 [2024-12-07 10:10:26.786985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.137 qpair failed and we were unable to recover it. 
00:35:58.137 [2024-12-07 10:10:26.796875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.137 [2024-12-07 10:10:26.796935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.137 [2024-12-07 10:10:26.796953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.137 [2024-12-07 10:10:26.796960] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.137 [2024-12-07 10:10:26.796966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.137 [2024-12-07 10:10:26.796981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.137 qpair failed and we were unable to recover it.
00:35:58.137 [2024-12-07 10:10:26.806922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.137 [2024-12-07 10:10:26.807031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.137 [2024-12-07 10:10:26.807047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.137 [2024-12-07 10:10:26.807054] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.137 [2024-12-07 10:10:26.807060] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.137 [2024-12-07 10:10:26.807075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.137 qpair failed and we were unable to recover it.
00:35:58.137 [2024-12-07 10:10:26.816966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.137 [2024-12-07 10:10:26.817025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.137 [2024-12-07 10:10:26.817040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.137 [2024-12-07 10:10:26.817046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.137 [2024-12-07 10:10:26.817052] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.137 [2024-12-07 10:10:26.817067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.137 qpair failed and we were unable to recover it.
00:35:58.137 [2024-12-07 10:10:26.826969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.137 [2024-12-07 10:10:26.827032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.137 [2024-12-07 10:10:26.827050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.137 [2024-12-07 10:10:26.827056] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.137 [2024-12-07 10:10:26.827062] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.137 [2024-12-07 10:10:26.827077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.137 qpair failed and we were unable to recover it.
00:35:58.137 [2024-12-07 10:10:26.836993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.137 [2024-12-07 10:10:26.837057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.137 [2024-12-07 10:10:26.837071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.137 [2024-12-07 10:10:26.837078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.137 [2024-12-07 10:10:26.837084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.137 [2024-12-07 10:10:26.837100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.137 qpair failed and we were unable to recover it.
00:35:58.137 [2024-12-07 10:10:26.847005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.137 [2024-12-07 10:10:26.847080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.137 [2024-12-07 10:10:26.847094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.137 [2024-12-07 10:10:26.847101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.137 [2024-12-07 10:10:26.847108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.137 [2024-12-07 10:10:26.847122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.137 qpair failed and we were unable to recover it.
00:35:58.137 [2024-12-07 10:10:26.857050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.137 [2024-12-07 10:10:26.857105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.137 [2024-12-07 10:10:26.857123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.137 [2024-12-07 10:10:26.857130] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.137 [2024-12-07 10:10:26.857136] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.137 [2024-12-07 10:10:26.857152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.137 qpair failed and we were unable to recover it.
00:35:58.396 [2024-12-07 10:10:26.867077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.396 [2024-12-07 10:10:26.867140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.396 [2024-12-07 10:10:26.867157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.396 [2024-12-07 10:10:26.867164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.396 [2024-12-07 10:10:26.867170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.396 [2024-12-07 10:10:26.867189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.396 qpair failed and we were unable to recover it.
00:35:58.396 [2024-12-07 10:10:26.877115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.396 [2024-12-07 10:10:26.877191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.396 [2024-12-07 10:10:26.877210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.396 [2024-12-07 10:10:26.877216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.396 [2024-12-07 10:10:26.877223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.396 [2024-12-07 10:10:26.877238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.396 qpair failed and we were unable to recover it.
00:35:58.396 [2024-12-07 10:10:26.887106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.396 [2024-12-07 10:10:26.887210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.396 [2024-12-07 10:10:26.887225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.396 [2024-12-07 10:10:26.887232] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.396 [2024-12-07 10:10:26.887238] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.396 [2024-12-07 10:10:26.887253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.396 qpair failed and we were unable to recover it.
00:35:58.396 [2024-12-07 10:10:26.897157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.396 [2024-12-07 10:10:26.897215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.396 [2024-12-07 10:10:26.897230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.396 [2024-12-07 10:10:26.897236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.396 [2024-12-07 10:10:26.897242] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.396 [2024-12-07 10:10:26.897257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.396 qpair failed and we were unable to recover it.
00:35:58.396 [2024-12-07 10:10:26.907243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.396 [2024-12-07 10:10:26.907306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.396 [2024-12-07 10:10:26.907320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.396 [2024-12-07 10:10:26.907326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.396 [2024-12-07 10:10:26.907332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.396 [2024-12-07 10:10:26.907347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.396 qpair failed and we were unable to recover it.
00:35:58.396 [2024-12-07 10:10:26.917212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.396 [2024-12-07 10:10:26.917275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.396 [2024-12-07 10:10:26.917293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.396 [2024-12-07 10:10:26.917300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.396 [2024-12-07 10:10:26.917306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.396 [2024-12-07 10:10:26.917320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.396 qpair failed and we were unable to recover it.
00:35:58.396 [2024-12-07 10:10:26.927258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.396 [2024-12-07 10:10:26.927318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.396 [2024-12-07 10:10:26.927333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.396 [2024-12-07 10:10:26.927339] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.396 [2024-12-07 10:10:26.927345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.396 [2024-12-07 10:10:26.927359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.396 qpair failed and we were unable to recover it.
00:35:58.396 [2024-12-07 10:10:26.937253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.396 [2024-12-07 10:10:26.937332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.396 [2024-12-07 10:10:26.937347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.396 [2024-12-07 10:10:26.937354] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.396 [2024-12-07 10:10:26.937360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.396 [2024-12-07 10:10:26.937374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.396 qpair failed and we were unable to recover it.
00:35:58.396 [2024-12-07 10:10:26.947232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.396 [2024-12-07 10:10:26.947296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.396 [2024-12-07 10:10:26.947310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.396 [2024-12-07 10:10:26.947317] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.396 [2024-12-07 10:10:26.947323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.396 [2024-12-07 10:10:26.947337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.396 qpair failed and we were unable to recover it.
00:35:58.396 [2024-12-07 10:10:26.957323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.396 [2024-12-07 10:10:26.957394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.396 [2024-12-07 10:10:26.957409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.396 [2024-12-07 10:10:26.957416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.396 [2024-12-07 10:10:26.957422] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.396 [2024-12-07 10:10:26.957444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.396 qpair failed and we were unable to recover it.
00:35:58.396 [2024-12-07 10:10:26.967371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.396 [2024-12-07 10:10:26.967432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.396 [2024-12-07 10:10:26.967447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.396 [2024-12-07 10:10:26.967453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.396 [2024-12-07 10:10:26.967459] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.396 [2024-12-07 10:10:26.967474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.396 qpair failed and we were unable to recover it.
00:35:58.396 [2024-12-07 10:10:26.977374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.396 [2024-12-07 10:10:26.977432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.396 [2024-12-07 10:10:26.977446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.396 [2024-12-07 10:10:26.977453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.396 [2024-12-07 10:10:26.977459] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.396 [2024-12-07 10:10:26.977473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.396 qpair failed and we were unable to recover it.
00:35:58.396 [2024-12-07 10:10:26.987457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.396 [2024-12-07 10:10:26.987518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.396 [2024-12-07 10:10:26.987532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.396 [2024-12-07 10:10:26.987539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.396 [2024-12-07 10:10:26.987545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.396 [2024-12-07 10:10:26.987559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.396 qpair failed and we were unable to recover it.
00:35:58.396 [2024-12-07 10:10:26.997438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.396 [2024-12-07 10:10:26.997506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.396 [2024-12-07 10:10:26.997522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.396 [2024-12-07 10:10:26.997529] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.396 [2024-12-07 10:10:26.997534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.396 [2024-12-07 10:10:26.997549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.396 qpair failed and we were unable to recover it.
00:35:58.396 [2024-12-07 10:10:27.007489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.396 [2024-12-07 10:10:27.007555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.396 [2024-12-07 10:10:27.007573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.396 [2024-12-07 10:10:27.007579] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.396 [2024-12-07 10:10:27.007585] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.396 [2024-12-07 10:10:27.007600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.396 qpair failed and we were unable to recover it.
00:35:58.396 [2024-12-07 10:10:27.017541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.396 [2024-12-07 10:10:27.017602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.396 [2024-12-07 10:10:27.017615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.396 [2024-12-07 10:10:27.017622] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.396 [2024-12-07 10:10:27.017628] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.396 [2024-12-07 10:10:27.017642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.396 qpair failed and we were unable to recover it.
00:35:58.397 [2024-12-07 10:10:27.027536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.397 [2024-12-07 10:10:27.027599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.397 [2024-12-07 10:10:27.027613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.397 [2024-12-07 10:10:27.027620] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.397 [2024-12-07 10:10:27.027625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.397 [2024-12-07 10:10:27.027640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.397 qpair failed and we were unable to recover it.
00:35:58.397 [2024-12-07 10:10:27.037554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.397 [2024-12-07 10:10:27.037620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.397 [2024-12-07 10:10:27.037634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.397 [2024-12-07 10:10:27.037641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.397 [2024-12-07 10:10:27.037647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.397 [2024-12-07 10:10:27.037662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.397 qpair failed and we were unable to recover it.
00:35:58.397 [2024-12-07 10:10:27.047557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.397 [2024-12-07 10:10:27.047610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.397 [2024-12-07 10:10:27.047624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.397 [2024-12-07 10:10:27.047631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.397 [2024-12-07 10:10:27.047637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.397 [2024-12-07 10:10:27.047655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.397 qpair failed and we were unable to recover it.
00:35:58.397 [2024-12-07 10:10:27.057635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.397 [2024-12-07 10:10:27.057696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.397 [2024-12-07 10:10:27.057710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.397 [2024-12-07 10:10:27.057717] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.397 [2024-12-07 10:10:27.057723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.397 [2024-12-07 10:10:27.057737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.397 qpair failed and we were unable to recover it.
00:35:58.397 [2024-12-07 10:10:27.067651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.397 [2024-12-07 10:10:27.067714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.397 [2024-12-07 10:10:27.067729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.397 [2024-12-07 10:10:27.067735] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.397 [2024-12-07 10:10:27.067742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.397 [2024-12-07 10:10:27.067757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.397 qpair failed and we were unable to recover it.
00:35:58.397 [2024-12-07 10:10:27.077685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.397 [2024-12-07 10:10:27.077758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.397 [2024-12-07 10:10:27.077775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.397 [2024-12-07 10:10:27.077782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.397 [2024-12-07 10:10:27.077788] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.397 [2024-12-07 10:10:27.077802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.397 qpair failed and we were unable to recover it.
00:35:58.397 [2024-12-07 10:10:27.087702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.397 [2024-12-07 10:10:27.087779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.397 [2024-12-07 10:10:27.087794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.397 [2024-12-07 10:10:27.087801] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.397 [2024-12-07 10:10:27.087807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.397 [2024-12-07 10:10:27.087822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.397 qpair failed and we were unable to recover it.
00:35:58.397 [2024-12-07 10:10:27.097704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.397 [2024-12-07 10:10:27.097766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.397 [2024-12-07 10:10:27.097783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.397 [2024-12-07 10:10:27.097790] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.397 [2024-12-07 10:10:27.097796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.397 [2024-12-07 10:10:27.097811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.397 qpair failed and we were unable to recover it.
00:35:58.397 [2024-12-07 10:10:27.107747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.397 [2024-12-07 10:10:27.107808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.397 [2024-12-07 10:10:27.107823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.397 [2024-12-07 10:10:27.107829] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.397 [2024-12-07 10:10:27.107835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.397 [2024-12-07 10:10:27.107849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.397 qpair failed and we were unable to recover it.
00:35:58.397 [2024-12-07 10:10:27.117764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.397 [2024-12-07 10:10:27.117835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.397 [2024-12-07 10:10:27.117854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.397 [2024-12-07 10:10:27.117861] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.397 [2024-12-07 10:10:27.117867] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.397 [2024-12-07 10:10:27.117883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.397 qpair failed and we were unable to recover it.
00:35:58.655 [2024-12-07 10:10:27.127820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.656 [2024-12-07 10:10:27.127880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.656 [2024-12-07 10:10:27.127897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.656 [2024-12-07 10:10:27.127904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.656 [2024-12-07 10:10:27.127910] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.656 [2024-12-07 10:10:27.127926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.656 qpair failed and we were unable to recover it.
00:35:58.656 [2024-12-07 10:10:27.137821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.656 [2024-12-07 10:10:27.137880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.656 [2024-12-07 10:10:27.137895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.656 [2024-12-07 10:10:27.137902] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.656 [2024-12-07 10:10:27.137911] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.656 [2024-12-07 10:10:27.137926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.656 qpair failed and we were unable to recover it.
00:35:58.656 [2024-12-07 10:10:27.147858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.656 [2024-12-07 10:10:27.147919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.656 [2024-12-07 10:10:27.147934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.656 [2024-12-07 10:10:27.147941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.656 [2024-12-07 10:10:27.147956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:58.656 [2024-12-07 10:10:27.147973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.656 qpair failed and we were unable to recover it.
00:35:58.656 [2024-12-07 10:10:27.157859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.157918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.157934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.157940] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.157950] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.157966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.167918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.168020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.168036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.168042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.168049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.168064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.177960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.178018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.178032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.178038] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.178044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.178059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.187984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.188048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.188066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.188072] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.188079] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.188094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.197935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.197998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.198013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.198020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.198027] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.198041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.208028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.208090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.208104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.208111] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.208117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.208131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.218034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.218091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.218106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.218112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.218118] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.218133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.228128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.228231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.228247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.228253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.228263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.228278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.238133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.238193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.238209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.238215] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.238222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.238237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.248108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.248168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.248183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.248190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.248196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.248211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.258192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.258252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.258267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.258274] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.258279] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.258294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.268217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.268280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.268294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.268301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.268307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.268321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.278262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.278374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.278388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.278395] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.278401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.278416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.288271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.288328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.288342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.288348] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.288354] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.288368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.298298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.298356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.298370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.298376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.298382] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.298397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.308362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.308472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.308486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.308493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.308499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.308514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.318429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.318535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.318549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.318556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.318566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.318580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.328431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.328541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.328555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.328562] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.328568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.328582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.338411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.338472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.338486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.338493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.338499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.338514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.348456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.348518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.348532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.348538] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.656 [2024-12-07 10:10:27.348544] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.656 [2024-12-07 10:10:27.348558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.656 qpair failed and we were unable to recover it. 
00:35:58.656 [2024-12-07 10:10:27.358534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.656 [2024-12-07 10:10:27.358643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.656 [2024-12-07 10:10:27.358658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.656 [2024-12-07 10:10:27.358664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.657 [2024-12-07 10:10:27.358670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.657 [2024-12-07 10:10:27.358685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.657 qpair failed and we were unable to recover it. 
00:35:58.657 [2024-12-07 10:10:27.368502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.657 [2024-12-07 10:10:27.368564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.657 [2024-12-07 10:10:27.368579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.657 [2024-12-07 10:10:27.368586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.657 [2024-12-07 10:10:27.368592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.657 [2024-12-07 10:10:27.368605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.657 qpair failed and we were unable to recover it. 
00:35:58.915 [2024-12-07 10:10:27.378535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.915 [2024-12-07 10:10:27.378594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.915 [2024-12-07 10:10:27.378612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.915 [2024-12-07 10:10:27.378619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.915 [2024-12-07 10:10:27.378625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.915 [2024-12-07 10:10:27.378641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.915 qpair failed and we were unable to recover it. 
00:35:58.915 [2024-12-07 10:10:27.388573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.915 [2024-12-07 10:10:27.388635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.915 [2024-12-07 10:10:27.388651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.915 [2024-12-07 10:10:27.388658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.915 [2024-12-07 10:10:27.388664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.915 [2024-12-07 10:10:27.388680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.915 qpair failed and we were unable to recover it. 
00:35:58.915 [2024-12-07 10:10:27.398586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.915 [2024-12-07 10:10:27.398647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.915 [2024-12-07 10:10:27.398662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.915 [2024-12-07 10:10:27.398668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.915 [2024-12-07 10:10:27.398675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.915 [2024-12-07 10:10:27.398689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.915 qpair failed and we were unable to recover it. 
00:35:58.915 [2024-12-07 10:10:27.408638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.915 [2024-12-07 10:10:27.408750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.915 [2024-12-07 10:10:27.408765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.915 [2024-12-07 10:10:27.408772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.915 [2024-12-07 10:10:27.408782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.915 [2024-12-07 10:10:27.408796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.915 qpair failed and we were unable to recover it. 
00:35:58.915 [2024-12-07 10:10:27.418659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.915 [2024-12-07 10:10:27.418719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.915 [2024-12-07 10:10:27.418735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.915 [2024-12-07 10:10:27.418741] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.915 [2024-12-07 10:10:27.418747] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.915 [2024-12-07 10:10:27.418761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.915 qpair failed and we were unable to recover it. 
00:35:58.915 [2024-12-07 10:10:27.428698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.915 [2024-12-07 10:10:27.428762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.915 [2024-12-07 10:10:27.428776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.915 [2024-12-07 10:10:27.428783] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.915 [2024-12-07 10:10:27.428789] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.915 [2024-12-07 10:10:27.428803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.915 qpair failed and we were unable to recover it. 
00:35:58.915 [2024-12-07 10:10:27.438713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.915 [2024-12-07 10:10:27.438782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.915 [2024-12-07 10:10:27.438796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.438803] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.438809] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.438824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.448726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.448788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.448803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.448810] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.448817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.448832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.458817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.458930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.458945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.458956] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.458962] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.458977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.468795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.468859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.468874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.468880] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.468887] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.468902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.478813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.478870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.478885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.478892] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.478897] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.478913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.488811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.488869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.488885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.488892] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.488899] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.488914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.498811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.498870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.498885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.498897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.498904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.498918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.508912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.508972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.508986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.508993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.508999] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.509013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.518941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.519004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.519018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.519025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.519031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.519046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.528968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.529031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.529046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.529053] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.529059] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.529074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.539012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.539070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.539085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.539092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.539098] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.539113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.548966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.549025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.549040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.549047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.549053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.549068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.559051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.559115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.559130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.559137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.559144] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.559160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.569126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.569184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.569198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.569205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.569210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.569225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.579115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.579172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.579186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.579193] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.579199] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.579213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.589149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.589210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.589225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.589234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.589241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.589255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.599108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.599172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.599186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.599193] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.599200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.599214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.609203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.609269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.609285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.609293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.609299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.609315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.619174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.619235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.619249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.619256] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.619262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.619277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:58.916 [2024-12-07 10:10:27.629278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.916 [2024-12-07 10:10:27.629339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.916 [2024-12-07 10:10:27.629354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.916 [2024-12-07 10:10:27.629360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.916 [2024-12-07 10:10:27.629366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:58.916 [2024-12-07 10:10:27.629381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.916 qpair failed and we were unable to recover it. 
00:35:59.175 [2024-12-07 10:10:27.639240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.175 [2024-12-07 10:10:27.639303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.175 [2024-12-07 10:10:27.639321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.175 [2024-12-07 10:10:27.639328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.175 [2024-12-07 10:10:27.639334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.175 [2024-12-07 10:10:27.639350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.175 qpair failed and we were unable to recover it. 
00:35:59.175 [2024-12-07 10:10:27.649300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.175 [2024-12-07 10:10:27.649365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.175 [2024-12-07 10:10:27.649382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.175 [2024-12-07 10:10:27.649389] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.175 [2024-12-07 10:10:27.649395] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.175 [2024-12-07 10:10:27.649411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.175 qpair failed and we were unable to recover it. 
00:35:59.175 [2024-12-07 10:10:27.659344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.175 [2024-12-07 10:10:27.659401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.175 [2024-12-07 10:10:27.659415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.175 [2024-12-07 10:10:27.659422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.175 [2024-12-07 10:10:27.659427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.175 [2024-12-07 10:10:27.659442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.175 qpair failed and we were unable to recover it. 
00:35:59.175 [2024-12-07 10:10:27.669394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.175 [2024-12-07 10:10:27.669490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.175 [2024-12-07 10:10:27.669505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.175 [2024-12-07 10:10:27.669513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.175 [2024-12-07 10:10:27.669519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.175 [2024-12-07 10:10:27.669534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.175 qpair failed and we were unable to recover it. 
00:35:59.175 [2024-12-07 10:10:27.679444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.175 [2024-12-07 10:10:27.679506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.175 [2024-12-07 10:10:27.679520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.175 [2024-12-07 10:10:27.679530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.175 [2024-12-07 10:10:27.679536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.175 [2024-12-07 10:10:27.679552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.175 qpair failed and we were unable to recover it. 
00:35:59.175 [2024-12-07 10:10:27.689441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.175 [2024-12-07 10:10:27.689502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.175 [2024-12-07 10:10:27.689517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.175 [2024-12-07 10:10:27.689524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.175 [2024-12-07 10:10:27.689530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.175 [2024-12-07 10:10:27.689545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.175 qpair failed and we were unable to recover it. 
00:35:59.175 [2024-12-07 10:10:27.699442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.175 [2024-12-07 10:10:27.699501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.175 [2024-12-07 10:10:27.699515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.175 [2024-12-07 10:10:27.699521] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.175 [2024-12-07 10:10:27.699527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.175 [2024-12-07 10:10:27.699543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.175 qpair failed and we were unable to recover it. 
00:35:59.175 [2024-12-07 10:10:27.709434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.175 [2024-12-07 10:10:27.709537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.175 [2024-12-07 10:10:27.709552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.175 [2024-12-07 10:10:27.709559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.175 [2024-12-07 10:10:27.709565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.175 [2024-12-07 10:10:27.709579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.175 qpair failed and we were unable to recover it. 
00:35:59.175 [2024-12-07 10:10:27.719488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.175 [2024-12-07 10:10:27.719548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.175 [2024-12-07 10:10:27.719562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.175 [2024-12-07 10:10:27.719568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.175 [2024-12-07 10:10:27.719574] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.175 [2024-12-07 10:10:27.719589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.175 qpair failed and we were unable to recover it. 
00:35:59.175 [2024-12-07 10:10:27.729478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.175 [2024-12-07 10:10:27.729535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.175 [2024-12-07 10:10:27.729550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.175 [2024-12-07 10:10:27.729557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.175 [2024-12-07 10:10:27.729563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.175 [2024-12-07 10:10:27.729577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.175 qpair failed and we were unable to recover it. 
00:35:59.175 [2024-12-07 10:10:27.739522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.175 [2024-12-07 10:10:27.739583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.175 [2024-12-07 10:10:27.739597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.175 [2024-12-07 10:10:27.739604] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.175 [2024-12-07 10:10:27.739610] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.175 [2024-12-07 10:10:27.739624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.175 qpair failed and we were unable to recover it. 
00:35:59.175 [2024-12-07 10:10:27.749560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.175 [2024-12-07 10:10:27.749659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.175 [2024-12-07 10:10:27.749674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.176 [2024-12-07 10:10:27.749681] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.176 [2024-12-07 10:10:27.749688] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.176 [2024-12-07 10:10:27.749703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.176 qpair failed and we were unable to recover it. 
00:35:59.176 [2024-12-07 10:10:27.759570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.176 [2024-12-07 10:10:27.759630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.176 [2024-12-07 10:10:27.759646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.176 [2024-12-07 10:10:27.759653] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.176 [2024-12-07 10:10:27.759660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.176 [2024-12-07 10:10:27.759675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.176 qpair failed and we were unable to recover it.
00:35:59.176 [2024-12-07 10:10:27.769668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.176 [2024-12-07 10:10:27.769748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.176 [2024-12-07 10:10:27.769764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.176 [2024-12-07 10:10:27.769774] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.176 [2024-12-07 10:10:27.769780] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.176 [2024-12-07 10:10:27.769796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.176 qpair failed and we were unable to recover it.
00:35:59.176 [2024-12-07 10:10:27.779682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.176 [2024-12-07 10:10:27.779749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.176 [2024-12-07 10:10:27.779763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.176 [2024-12-07 10:10:27.779769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.176 [2024-12-07 10:10:27.779775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.176 [2024-12-07 10:10:27.779790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.176 qpair failed and we were unable to recover it.
00:35:59.176 [2024-12-07 10:10:27.789722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.176 [2024-12-07 10:10:27.789786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.176 [2024-12-07 10:10:27.789801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.176 [2024-12-07 10:10:27.789807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.176 [2024-12-07 10:10:27.789813] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.176 [2024-12-07 10:10:27.789828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.176 qpair failed and we were unable to recover it.
00:35:59.176 [2024-12-07 10:10:27.799743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.176 [2024-12-07 10:10:27.799799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.176 [2024-12-07 10:10:27.799813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.176 [2024-12-07 10:10:27.799820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.176 [2024-12-07 10:10:27.799826] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.176 [2024-12-07 10:10:27.799840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.176 qpair failed and we were unable to recover it.
00:35:59.176 [2024-12-07 10:10:27.809743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.176 [2024-12-07 10:10:27.809806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.176 [2024-12-07 10:10:27.809821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.176 [2024-12-07 10:10:27.809827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.176 [2024-12-07 10:10:27.809833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.176 [2024-12-07 10:10:27.809849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.176 qpair failed and we were unable to recover it.
00:35:59.176 [2024-12-07 10:10:27.819832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.176 [2024-12-07 10:10:27.819896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.176 [2024-12-07 10:10:27.819911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.176 [2024-12-07 10:10:27.819917] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.176 [2024-12-07 10:10:27.819923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.176 [2024-12-07 10:10:27.819938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.176 qpair failed and we were unable to recover it.
00:35:59.176 [2024-12-07 10:10:27.829854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.176 [2024-12-07 10:10:27.829918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.176 [2024-12-07 10:10:27.829931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.176 [2024-12-07 10:10:27.829938] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.176 [2024-12-07 10:10:27.829944] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.176 [2024-12-07 10:10:27.829965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.176 qpair failed and we were unable to recover it.
00:35:59.176 [2024-12-07 10:10:27.839888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.176 [2024-12-07 10:10:27.839952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.176 [2024-12-07 10:10:27.839967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.176 [2024-12-07 10:10:27.839974] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.176 [2024-12-07 10:10:27.839980] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.176 [2024-12-07 10:10:27.839995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.176 qpair failed and we were unable to recover it.
00:35:59.176 [2024-12-07 10:10:27.849887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.176 [2024-12-07 10:10:27.849974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.176 [2024-12-07 10:10:27.849990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.176 [2024-12-07 10:10:27.849996] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.176 [2024-12-07 10:10:27.850002] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.176 [2024-12-07 10:10:27.850017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.176 qpair failed and we were unable to recover it.
00:35:59.176 [2024-12-07 10:10:27.859910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.176 [2024-12-07 10:10:27.859972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.176 [2024-12-07 10:10:27.859989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.176 [2024-12-07 10:10:27.859996] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.176 [2024-12-07 10:10:27.860002] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.176 [2024-12-07 10:10:27.860017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.176 qpair failed and we were unable to recover it.
00:35:59.176 [2024-12-07 10:10:27.869899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.176 [2024-12-07 10:10:27.869968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.176 [2024-12-07 10:10:27.869982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.176 [2024-12-07 10:10:27.869989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.176 [2024-12-07 10:10:27.869995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.176 [2024-12-07 10:10:27.870009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.176 qpair failed and we were unable to recover it.
00:35:59.176 [2024-12-07 10:10:27.879985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.176 [2024-12-07 10:10:27.880058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.176 [2024-12-07 10:10:27.880073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.176 [2024-12-07 10:10:27.880080] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.176 [2024-12-07 10:10:27.880087] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.176 [2024-12-07 10:10:27.880102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.176 qpair failed and we were unable to recover it.
00:35:59.177 [2024-12-07 10:10:27.889942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.177 [2024-12-07 10:10:27.890009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.177 [2024-12-07 10:10:27.890025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.177 [2024-12-07 10:10:27.890032] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.177 [2024-12-07 10:10:27.890038] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.177 [2024-12-07 10:10:27.890053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.177 qpair failed and we were unable to recover it.
00:35:59.435 [2024-12-07 10:10:27.899967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.435 [2024-12-07 10:10:27.900030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.435 [2024-12-07 10:10:27.900047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.435 [2024-12-07 10:10:27.900054] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.435 [2024-12-07 10:10:27.900060] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.435 [2024-12-07 10:10:27.900076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.435 qpair failed and we were unable to recover it.
00:35:59.436 [2024-12-07 10:10:27.910098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.436 [2024-12-07 10:10:27.910161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.436 [2024-12-07 10:10:27.910177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.436 [2024-12-07 10:10:27.910184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.436 [2024-12-07 10:10:27.910190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.436 [2024-12-07 10:10:27.910206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.436 qpair failed and we were unable to recover it.
00:35:59.436 [2024-12-07 10:10:27.920097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.436 [2024-12-07 10:10:27.920159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.436 [2024-12-07 10:10:27.920174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.436 [2024-12-07 10:10:27.920180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.436 [2024-12-07 10:10:27.920187] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.436 [2024-12-07 10:10:27.920201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.436 qpair failed and we were unable to recover it.
00:35:59.436 [2024-12-07 10:10:27.930132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.436 [2024-12-07 10:10:27.930190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.436 [2024-12-07 10:10:27.930204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.436 [2024-12-07 10:10:27.930211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.436 [2024-12-07 10:10:27.930217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.436 [2024-12-07 10:10:27.930232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.436 qpair failed and we were unable to recover it.
00:35:59.436 [2024-12-07 10:10:27.940146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.436 [2024-12-07 10:10:27.940204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.436 [2024-12-07 10:10:27.940219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.436 [2024-12-07 10:10:27.940225] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.436 [2024-12-07 10:10:27.940231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.436 [2024-12-07 10:10:27.940245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.436 qpair failed and we were unable to recover it.
00:35:59.436 [2024-12-07 10:10:27.950192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.436 [2024-12-07 10:10:27.950253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.436 [2024-12-07 10:10:27.950271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.436 [2024-12-07 10:10:27.950278] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.436 [2024-12-07 10:10:27.950285] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.436 [2024-12-07 10:10:27.950300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.436 qpair failed and we were unable to recover it.
00:35:59.436 [2024-12-07 10:10:27.960206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.436 [2024-12-07 10:10:27.960261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.436 [2024-12-07 10:10:27.960276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.436 [2024-12-07 10:10:27.960283] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.436 [2024-12-07 10:10:27.960289] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.436 [2024-12-07 10:10:27.960304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.436 qpair failed and we were unable to recover it.
00:35:59.436 [2024-12-07 10:10:27.970265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.436 [2024-12-07 10:10:27.970327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.436 [2024-12-07 10:10:27.970342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.436 [2024-12-07 10:10:27.970348] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.436 [2024-12-07 10:10:27.970354] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.436 [2024-12-07 10:10:27.970369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.436 qpair failed and we were unable to recover it.
00:35:59.436 [2024-12-07 10:10:27.980231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.436 [2024-12-07 10:10:27.980291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.436 [2024-12-07 10:10:27.980306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.436 [2024-12-07 10:10:27.980313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.436 [2024-12-07 10:10:27.980319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.436 [2024-12-07 10:10:27.980334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.436 qpair failed and we were unable to recover it.
00:35:59.436 [2024-12-07 10:10:27.990288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.436 [2024-12-07 10:10:27.990348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.436 [2024-12-07 10:10:27.990363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.436 [2024-12-07 10:10:27.990370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.436 [2024-12-07 10:10:27.990376] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.436 [2024-12-07 10:10:27.990391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.436 qpair failed and we were unable to recover it.
00:35:59.436 [2024-12-07 10:10:28.000236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.436 [2024-12-07 10:10:28.000298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.436 [2024-12-07 10:10:28.000312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.436 [2024-12-07 10:10:28.000319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.436 [2024-12-07 10:10:28.000325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.436 [2024-12-07 10:10:28.000339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.436 qpair failed and we were unable to recover it.
00:35:59.436 [2024-12-07 10:10:28.010269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.436 [2024-12-07 10:10:28.010327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.436 [2024-12-07 10:10:28.010342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.436 [2024-12-07 10:10:28.010349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.436 [2024-12-07 10:10:28.010355] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.436 [2024-12-07 10:10:28.010370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.436 qpair failed and we were unable to recover it.
00:35:59.436 [2024-12-07 10:10:28.020466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.436 [2024-12-07 10:10:28.020530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.436 [2024-12-07 10:10:28.020544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.436 [2024-12-07 10:10:28.020551] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.436 [2024-12-07 10:10:28.020556] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.436 [2024-12-07 10:10:28.020571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.436 qpair failed and we were unable to recover it.
00:35:59.436 [2024-12-07 10:10:28.030381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.436 [2024-12-07 10:10:28.030441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.436 [2024-12-07 10:10:28.030455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.436 [2024-12-07 10:10:28.030462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.436 [2024-12-07 10:10:28.030468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.436 [2024-12-07 10:10:28.030483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.436 qpair failed and we were unable to recover it.
00:35:59.436 [2024-12-07 10:10:28.040411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.436 [2024-12-07 10:10:28.040473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.437 [2024-12-07 10:10:28.040490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.437 [2024-12-07 10:10:28.040497] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.437 [2024-12-07 10:10:28.040503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.437 [2024-12-07 10:10:28.040517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.437 qpair failed and we were unable to recover it.
00:35:59.437 [2024-12-07 10:10:28.050499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.437 [2024-12-07 10:10:28.050562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.437 [2024-12-07 10:10:28.050576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.437 [2024-12-07 10:10:28.050583] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.437 [2024-12-07 10:10:28.050589] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.437 [2024-12-07 10:10:28.050605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.437 qpair failed and we were unable to recover it.
00:35:59.437 [2024-12-07 10:10:28.060484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.437 [2024-12-07 10:10:28.060544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.437 [2024-12-07 10:10:28.060558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.437 [2024-12-07 10:10:28.060564] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.437 [2024-12-07 10:10:28.060571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.437 [2024-12-07 10:10:28.060585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.437 qpair failed and we were unable to recover it.
00:35:59.437 [2024-12-07 10:10:28.070548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.437 [2024-12-07 10:10:28.070613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.437 [2024-12-07 10:10:28.070627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.437 [2024-12-07 10:10:28.070634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.437 [2024-12-07 10:10:28.070640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.437 [2024-12-07 10:10:28.070654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.437 qpair failed and we were unable to recover it.
00:35:59.437 [2024-12-07 10:10:28.080546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.437 [2024-12-07 10:10:28.080607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.437 [2024-12-07 10:10:28.080621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.437 [2024-12-07 10:10:28.080628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.437 [2024-12-07 10:10:28.080634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.437 [2024-12-07 10:10:28.080652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.437 qpair failed and we were unable to recover it.
00:35:59.437 [2024-12-07 10:10:28.090567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.437 [2024-12-07 10:10:28.090630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.437 [2024-12-07 10:10:28.090646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.437 [2024-12-07 10:10:28.090652] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.437 [2024-12-07 10:10:28.090658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.437 [2024-12-07 10:10:28.090672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.437 qpair failed and we were unable to recover it.
00:35:59.437 [2024-12-07 10:10:28.100597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.437 [2024-12-07 10:10:28.100656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.437 [2024-12-07 10:10:28.100670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.437 [2024-12-07 10:10:28.100677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.437 [2024-12-07 10:10:28.100683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.437 [2024-12-07 10:10:28.100697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.437 qpair failed and we were unable to recover it.
00:35:59.437 [2024-12-07 10:10:28.110635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.437 [2024-12-07 10:10:28.110697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.437 [2024-12-07 10:10:28.110711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.437 [2024-12-07 10:10:28.110718] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.437 [2024-12-07 10:10:28.110724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.437 [2024-12-07 10:10:28.110738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.437 qpair failed and we were unable to recover it.
00:35:59.437 [2024-12-07 10:10:28.120659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.437 [2024-12-07 10:10:28.120719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.437 [2024-12-07 10:10:28.120734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.437 [2024-12-07 10:10:28.120740] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.437 [2024-12-07 10:10:28.120746] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.437 [2024-12-07 10:10:28.120761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.437 qpair failed and we were unable to recover it. 
00:35:59.437 [2024-12-07 10:10:28.130720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.437 [2024-12-07 10:10:28.130818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.437 [2024-12-07 10:10:28.130837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.437 [2024-12-07 10:10:28.130844] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.437 [2024-12-07 10:10:28.130850] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.437 [2024-12-07 10:10:28.130864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.437 qpair failed and we were unable to recover it. 
00:35:59.437 [2024-12-07 10:10:28.140713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.437 [2024-12-07 10:10:28.140769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.437 [2024-12-07 10:10:28.140783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.437 [2024-12-07 10:10:28.140789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.437 [2024-12-07 10:10:28.140796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.437 [2024-12-07 10:10:28.140810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.437 qpair failed and we were unable to recover it. 
00:35:59.437 [2024-12-07 10:10:28.150802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.437 [2024-12-07 10:10:28.150863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.437 [2024-12-07 10:10:28.150877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.437 [2024-12-07 10:10:28.150883] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.437 [2024-12-07 10:10:28.150889] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.437 [2024-12-07 10:10:28.150904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.437 qpair failed and we were unable to recover it. 
00:35:59.696 [2024-12-07 10:10:28.160839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.696 [2024-12-07 10:10:28.160936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.696 [2024-12-07 10:10:28.160959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.696 [2024-12-07 10:10:28.160967] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.696 [2024-12-07 10:10:28.160973] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.696 [2024-12-07 10:10:28.160989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.696 qpair failed and we were unable to recover it. 
00:35:59.696 [2024-12-07 10:10:28.170795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.696 [2024-12-07 10:10:28.170856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.696 [2024-12-07 10:10:28.170873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.696 [2024-12-07 10:10:28.170880] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.696 [2024-12-07 10:10:28.170886] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.696 [2024-12-07 10:10:28.170912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.696 qpair failed and we were unable to recover it. 
00:35:59.696 [2024-12-07 10:10:28.180834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.696 [2024-12-07 10:10:28.180892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.696 [2024-12-07 10:10:28.180907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.696 [2024-12-07 10:10:28.180913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.696 [2024-12-07 10:10:28.180919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.696 [2024-12-07 10:10:28.180934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.696 qpair failed and we were unable to recover it. 
00:35:59.696 [2024-12-07 10:10:28.190869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.696 [2024-12-07 10:10:28.190933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.696 [2024-12-07 10:10:28.190951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.696 [2024-12-07 10:10:28.190959] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.696 [2024-12-07 10:10:28.190965] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.696 [2024-12-07 10:10:28.190980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.696 qpair failed and we were unable to recover it. 
00:35:59.696 [2024-12-07 10:10:28.200862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.696 [2024-12-07 10:10:28.200921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.696 [2024-12-07 10:10:28.200936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.696 [2024-12-07 10:10:28.200942] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.696 [2024-12-07 10:10:28.200952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.696 [2024-12-07 10:10:28.200966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.696 qpair failed and we were unable to recover it. 
00:35:59.697 [2024-12-07 10:10:28.210908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.697 [2024-12-07 10:10:28.210971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.697 [2024-12-07 10:10:28.210986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.697 [2024-12-07 10:10:28.210993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.697 [2024-12-07 10:10:28.210999] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.697 [2024-12-07 10:10:28.211013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.697 qpair failed and we were unable to recover it. 
00:35:59.697 [2024-12-07 10:10:28.220937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.697 [2024-12-07 10:10:28.221043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.697 [2024-12-07 10:10:28.221063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.697 [2024-12-07 10:10:28.221070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.697 [2024-12-07 10:10:28.221076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.697 [2024-12-07 10:10:28.221092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.697 qpair failed and we were unable to recover it. 
00:35:59.697 [2024-12-07 10:10:28.230987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.697 [2024-12-07 10:10:28.231054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.697 [2024-12-07 10:10:28.231067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.697 [2024-12-07 10:10:28.231074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.697 [2024-12-07 10:10:28.231080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.697 [2024-12-07 10:10:28.231095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.697 qpair failed and we were unable to recover it. 
00:35:59.697 [2024-12-07 10:10:28.241005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.697 [2024-12-07 10:10:28.241069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.697 [2024-12-07 10:10:28.241084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.697 [2024-12-07 10:10:28.241090] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.697 [2024-12-07 10:10:28.241097] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.697 [2024-12-07 10:10:28.241112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.697 qpair failed and we were unable to recover it. 
00:35:59.697 [2024-12-07 10:10:28.251029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.697 [2024-12-07 10:10:28.251086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.697 [2024-12-07 10:10:28.251101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.697 [2024-12-07 10:10:28.251107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.697 [2024-12-07 10:10:28.251113] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.697 [2024-12-07 10:10:28.251127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.697 qpair failed and we were unable to recover it. 
00:35:59.697 [2024-12-07 10:10:28.260987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.697 [2024-12-07 10:10:28.261042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.697 [2024-12-07 10:10:28.261056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.697 [2024-12-07 10:10:28.261063] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.697 [2024-12-07 10:10:28.261069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.697 [2024-12-07 10:10:28.261087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.697 qpair failed and we were unable to recover it. 
00:35:59.697 [2024-12-07 10:10:28.271111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.697 [2024-12-07 10:10:28.271178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.697 [2024-12-07 10:10:28.271191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.697 [2024-12-07 10:10:28.271198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.697 [2024-12-07 10:10:28.271204] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.697 [2024-12-07 10:10:28.271219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.697 qpair failed and we were unable to recover it. 
00:35:59.697 [2024-12-07 10:10:28.281114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.697 [2024-12-07 10:10:28.281175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.697 [2024-12-07 10:10:28.281189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.697 [2024-12-07 10:10:28.281195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.697 [2024-12-07 10:10:28.281201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.697 [2024-12-07 10:10:28.281215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.697 qpair failed and we were unable to recover it. 
00:35:59.697 [2024-12-07 10:10:28.291166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.697 [2024-12-07 10:10:28.291221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.697 [2024-12-07 10:10:28.291234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.697 [2024-12-07 10:10:28.291241] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.697 [2024-12-07 10:10:28.291246] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.697 [2024-12-07 10:10:28.291260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.697 qpair failed and we were unable to recover it. 
00:35:59.697 [2024-12-07 10:10:28.301167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.697 [2024-12-07 10:10:28.301235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.697 [2024-12-07 10:10:28.301251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.697 [2024-12-07 10:10:28.301258] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.697 [2024-12-07 10:10:28.301264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.697 [2024-12-07 10:10:28.301278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.697 qpair failed and we were unable to recover it. 
00:35:59.697 [2024-12-07 10:10:28.311212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.697 [2024-12-07 10:10:28.311270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.697 [2024-12-07 10:10:28.311288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.697 [2024-12-07 10:10:28.311294] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.697 [2024-12-07 10:10:28.311300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.697 [2024-12-07 10:10:28.311315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.697 qpair failed and we were unable to recover it. 
00:35:59.697 [2024-12-07 10:10:28.321265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.697 [2024-12-07 10:10:28.321370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.697 [2024-12-07 10:10:28.321385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.697 [2024-12-07 10:10:28.321392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.697 [2024-12-07 10:10:28.321398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.697 [2024-12-07 10:10:28.321412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.697 qpair failed and we were unable to recover it. 
00:35:59.697 [2024-12-07 10:10:28.331255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.697 [2024-12-07 10:10:28.331313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.697 [2024-12-07 10:10:28.331327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.697 [2024-12-07 10:10:28.331333] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.697 [2024-12-07 10:10:28.331339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.697 [2024-12-07 10:10:28.331353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.697 qpair failed and we were unable to recover it. 
00:35:59.697 [2024-12-07 10:10:28.341278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.697 [2024-12-07 10:10:28.341353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.697 [2024-12-07 10:10:28.341367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.698 [2024-12-07 10:10:28.341374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.698 [2024-12-07 10:10:28.341380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.698 [2024-12-07 10:10:28.341395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.698 qpair failed and we were unable to recover it. 
00:35:59.698 [2024-12-07 10:10:28.351292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.698 [2024-12-07 10:10:28.351353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.698 [2024-12-07 10:10:28.351367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.698 [2024-12-07 10:10:28.351374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.698 [2024-12-07 10:10:28.351384] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.698 [2024-12-07 10:10:28.351399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.698 qpair failed and we were unable to recover it. 
00:35:59.698 [2024-12-07 10:10:28.361349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.698 [2024-12-07 10:10:28.361415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.698 [2024-12-07 10:10:28.361430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.698 [2024-12-07 10:10:28.361437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.698 [2024-12-07 10:10:28.361443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.698 [2024-12-07 10:10:28.361457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.698 qpair failed and we were unable to recover it. 
00:35:59.698 [2024-12-07 10:10:28.371378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.698 [2024-12-07 10:10:28.371436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.698 [2024-12-07 10:10:28.371450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.698 [2024-12-07 10:10:28.371456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.698 [2024-12-07 10:10:28.371462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.698 [2024-12-07 10:10:28.371476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.698 qpair failed and we were unable to recover it. 
00:35:59.698 [2024-12-07 10:10:28.381381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.698 [2024-12-07 10:10:28.381442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.698 [2024-12-07 10:10:28.381455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.698 [2024-12-07 10:10:28.381462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.698 [2024-12-07 10:10:28.381468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.698 [2024-12-07 10:10:28.381482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.698 qpair failed and we were unable to recover it. 
00:35:59.698 [2024-12-07 10:10:28.391450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.698 [2024-12-07 10:10:28.391511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.698 [2024-12-07 10:10:28.391525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.698 [2024-12-07 10:10:28.391532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.698 [2024-12-07 10:10:28.391538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010 00:35:59.698 [2024-12-07 10:10:28.391552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.698 qpair failed and we were unable to recover it. 
00:35:59.698 [2024-12-07 10:10:28.401470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.698 [2024-12-07 10:10:28.401535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.698 [2024-12-07 10:10:28.401549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.698 [2024-12-07 10:10:28.401556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.698 [2024-12-07 10:10:28.401562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.698 [2024-12-07 10:10:28.401576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.698 qpair failed and we were unable to recover it.
00:35:59.698 [2024-12-07 10:10:28.411501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.698 [2024-12-07 10:10:28.411561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.698 [2024-12-07 10:10:28.411575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.698 [2024-12-07 10:10:28.411581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.698 [2024-12-07 10:10:28.411587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2159010
00:35:59.698 [2024-12-07 10:10:28.411601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:59.698 qpair failed and we were unable to recover it.
00:35:59.956 [2024-12-07 10:10:28.421516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.956 [2024-12-07 10:10:28.421580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.956 [2024-12-07 10:10:28.421605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.956 [2024-12-07 10:10:28.421613] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.956 [2024-12-07 10:10:28.421621] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbfc000b90
00:35:59.956 [2024-12-07 10:10:28.421638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:59.956 qpair failed and we were unable to recover it.
00:35:59.956 [2024-12-07 10:10:28.431542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.956 [2024-12-07 10:10:28.431606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.956 [2024-12-07 10:10:28.431621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.956 [2024-12-07 10:10:28.431628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.956 [2024-12-07 10:10:28.431634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efbfc000b90
00:35:59.956 [2024-12-07 10:10:28.431650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:59.956 qpair failed and we were unable to recover it.
00:35:59.956 [2024-12-07 10:10:28.431764] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:35:59.956 A controller has encountered a failure and is being reset.
00:35:59.956 [2024-12-07 10:10:28.441528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.956 [2024-12-07 10:10:28.441598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.956 [2024-12-07 10:10:28.441623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.956 [2024-12-07 10:10:28.441638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.956 [2024-12-07 10:10:28.441647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efc04000b90
00:35:59.956 [2024-12-07 10:10:28.441669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.956 qpair failed and we were unable to recover it.
00:35:59.956 [2024-12-07 10:10:28.451537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.956 [2024-12-07 10:10:28.451612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.956 [2024-12-07 10:10:28.451628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.956 [2024-12-07 10:10:28.451635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.956 [2024-12-07 10:10:28.451642] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efc04000b90
00:35:59.956 [2024-12-07 10:10:28.451658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.956 qpair failed and we were unable to recover it.
00:35:59.956 Controller properly reset.
00:35:59.956 Initializing NVMe Controllers
00:35:59.956 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:35:59.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:35:59.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:35:59.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:35:59.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:35:59.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:35:59.956 Initialization complete. Launching workers.
00:35:59.956 Starting thread on core 1 00:35:59.956 Starting thread on core 2 00:35:59.956 Starting thread on core 3 00:35:59.956 Starting thread on core 0 00:35:59.956 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:35:59.956 00:35:59.956 real 0m10.698s 00:35:59.956 user 0m19.326s 00:35:59.956 sys 0m4.448s 00:35:59.956 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:59.956 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:59.956 ************************************ 00:35:59.956 END TEST nvmf_target_disconnect_tc2 00:35:59.956 ************************************ 00:35:59.956 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:35:59.956 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:59.956 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:59.956 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:59.956 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:35:59.956 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:59.956 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:35:59.956 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:59.957 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:59.957 rmmod nvme_tcp 00:35:59.957 rmmod nvme_fabrics 00:35:59.957 rmmod nvme_keyring 00:35:59.957 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:35:59.957 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:35:59.957 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:35:59.957 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@513 -- # '[' -n 1472750 ']' 00:35:59.957 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # killprocess 1472750 00:35:59.957 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1472750 ']' 00:35:59.957 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1472750 00:35:59.957 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:35:59.957 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:59.957 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1472750 00:35:59.957 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:35:59.957 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:35:59.957 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1472750' 00:35:59.957 killing process with pid 1472750 00:35:59.957 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 1472750 00:35:59.957 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1472750 00:36:00.215 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:00.215 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:00.215 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:00.215 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:00.215 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:36:00.215 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:36:00.215 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:00.215 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:00.215 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:00.215 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.215 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:00.215 10:10:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:02.741 10:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:02.741 00:36:02.741 real 0m19.043s 00:36:02.741 user 0m46.561s 00:36:02.741 sys 0m9.058s 00:36:02.741 10:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:02.741 10:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:02.741 ************************************ 00:36:02.741 END TEST nvmf_target_disconnect 00:36:02.741 ************************************ 00:36:02.741 10:10:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:02.741 00:36:02.741 real 7m13.702s 00:36:02.741 user 16m44.721s 00:36:02.741 sys 2m3.023s 00:36:02.741 10:10:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:02.741 10:10:30 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.741 ************************************ 00:36:02.741 END TEST nvmf_host 00:36:02.741 ************************************ 00:36:02.741 10:10:31 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:02.741 10:10:31 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:02.741 10:10:31 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:02.741 10:10:31 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:02.741 10:10:31 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:02.741 10:10:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:02.741 ************************************ 00:36:02.741 START TEST nvmf_target_core_interrupt_mode 00:36:02.741 ************************************ 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:02.741 * Looking for test storage... 
00:36:02.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:02.741 10:10:31 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:02.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.741 --rc 
genhtml_branch_coverage=1 00:36:02.741 --rc genhtml_function_coverage=1 00:36:02.741 --rc genhtml_legend=1 00:36:02.741 --rc geninfo_all_blocks=1 00:36:02.741 --rc geninfo_unexecuted_blocks=1 00:36:02.741 00:36:02.741 ' 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:02.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.741 --rc genhtml_branch_coverage=1 00:36:02.741 --rc genhtml_function_coverage=1 00:36:02.741 --rc genhtml_legend=1 00:36:02.741 --rc geninfo_all_blocks=1 00:36:02.741 --rc geninfo_unexecuted_blocks=1 00:36:02.741 00:36:02.741 ' 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:02.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.741 --rc genhtml_branch_coverage=1 00:36:02.741 --rc genhtml_function_coverage=1 00:36:02.741 --rc genhtml_legend=1 00:36:02.741 --rc geninfo_all_blocks=1 00:36:02.741 --rc geninfo_unexecuted_blocks=1 00:36:02.741 00:36:02.741 ' 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:02.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.741 --rc genhtml_branch_coverage=1 00:36:02.741 --rc genhtml_function_coverage=1 00:36:02.741 --rc genhtml_legend=1 00:36:02.741 --rc geninfo_all_blocks=1 00:36:02.741 --rc geninfo_unexecuted_blocks=1 00:36:02.741 00:36:02.741 ' 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:02.741 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:02.742 
10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.742 10:10:31 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:02.742 
10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:02.742 ************************************ 00:36:02.742 START TEST nvmf_abort 00:36:02.742 ************************************ 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:02.742 * Looking for test storage... 
00:36:02.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:02.742 10:10:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:02.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.742 --rc genhtml_branch_coverage=1 00:36:02.742 --rc genhtml_function_coverage=1 00:36:02.742 --rc genhtml_legend=1 00:36:02.742 --rc geninfo_all_blocks=1 00:36:02.742 --rc geninfo_unexecuted_blocks=1 00:36:02.742 00:36:02.742 ' 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:02.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.742 --rc genhtml_branch_coverage=1 00:36:02.742 --rc genhtml_function_coverage=1 00:36:02.742 --rc genhtml_legend=1 00:36:02.742 --rc geninfo_all_blocks=1 00:36:02.742 --rc geninfo_unexecuted_blocks=1 00:36:02.742 00:36:02.742 ' 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:02.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.742 --rc genhtml_branch_coverage=1 00:36:02.742 --rc genhtml_function_coverage=1 00:36:02.742 --rc genhtml_legend=1 00:36:02.742 --rc geninfo_all_blocks=1 00:36:02.742 --rc geninfo_unexecuted_blocks=1 00:36:02.742 00:36:02.742 ' 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:02.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.742 --rc genhtml_branch_coverage=1 00:36:02.742 --rc genhtml_function_coverage=1 00:36:02.742 --rc genhtml_legend=1 00:36:02.742 --rc geninfo_all_blocks=1 00:36:02.742 --rc geninfo_unexecuted_blocks=1 00:36:02.742 00:36:02.742 ' 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:02.742 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:03.000 10:10:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:03.000 10:10:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:03.000 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:08.270 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:36:08.270 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:08.270 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:08.270 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:08.270 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:08.270 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:08.270 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:08.270 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:08.271 10:10:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:08.271 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:08.271 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:08.271 
10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:08.271 Found net devices under 0000:86:00.0: cvl_0_0 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:08.271 Found net devices under 0000:86:00.1: cvl_0_1 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:08.271 10:10:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:08.271 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:08.530 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:08.530 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:08.530 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:08.530 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:08.530 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- 
# ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:08.530 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:08.530 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:08.530 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:08.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:08.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms 00:36:08.530 00:36:08.531 --- 10.0.0.2 ping statistics --- 00:36:08.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.531 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:36:08.531 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:08.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:08.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:36:08.531 00:36:08.531 --- 10.0.0.1 ping statistics --- 00:36:08.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.531 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:36:08.531 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:08.531 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:36:08.531 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:08.531 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:08.531 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:08.531 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:08.531 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:08.531 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:08.531 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:08.531 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:08.531 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:08.531 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:08.531 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:08.790 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # 
nvmfpid=1477283 00:36:08.790 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:08.790 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 1477283 00:36:08.790 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1477283 ']' 00:36:08.790 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.790 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:08.790 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:08.790 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:08.790 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:08.790 [2024-12-07 10:10:37.308879] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:08.790 [2024-12-07 10:10:37.309897] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:36:08.790 [2024-12-07 10:10:37.309935] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:08.790 [2024-12-07 10:10:37.369715] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:08.790 [2024-12-07 10:10:37.411599] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:08.790 [2024-12-07 10:10:37.411637] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:08.790 [2024-12-07 10:10:37.411645] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:08.790 [2024-12-07 10:10:37.411651] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:08.790 [2024-12-07 10:10:37.411658] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:08.790 [2024-12-07 10:10:37.411704] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:08.790 [2024-12-07 10:10:37.411790] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:08.790 [2024-12-07 10:10:37.411792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:08.790 [2024-12-07 10:10:37.484704] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:08.790 [2024-12-07 10:10:37.484760] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:08.790 [2024-12-07 10:10:37.485031] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:36:08.790 [2024-12-07 10:10:37.485241] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:08.790 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:08.790 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:36:08.790 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:08.790 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:08.790 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.049 [2024-12-07 10:10:37.552303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:36:09.049 Malloc0 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.049 Delay0 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.049 [2024-12-07 10:10:37.612451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.049 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:09.049 [2024-12-07 10:10:37.715077] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:11.581 Initializing NVMe Controllers 00:36:11.581 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:11.581 controller IO queue size 128 less than required 00:36:11.581 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:11.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:11.581 Initialization complete. Launching workers. 
00:36:11.581 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37151 00:36:11.581 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37208, failed to submit 66 00:36:11.581 success 37151, unsuccessful 57, failed 0 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:11.581 rmmod nvme_tcp 00:36:11.581 rmmod nvme_fabrics 00:36:11.581 rmmod nvme_keyring 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:11.581 10:10:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 1477283 ']' 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 1477283 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1477283 ']' 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1477283 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1477283 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1477283' 00:36:11.581 killing process with pid 1477283 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1477283 00:36:11.581 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1477283 00:36:11.581 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:11.581 10:10:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:11.581 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:11.581 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:11.581 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:36:11.581 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:11.581 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:36:11.581 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:11.581 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:11.581 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.581 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:11.581 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:13.490 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:13.490 00:36:13.490 real 0m10.857s 00:36:13.490 user 0m9.955s 00:36:13.490 sys 0m5.601s 00:36:13.490 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:13.490 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:13.490 ************************************ 00:36:13.490 END TEST nvmf_abort 00:36:13.490 ************************************ 00:36:13.490 10:10:42 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:13.490 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:13.490 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:13.490 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:13.749 ************************************ 00:36:13.749 START TEST nvmf_ns_hotplug_stress 00:36:13.749 ************************************ 00:36:13.749 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:13.749 * Looking for test storage... 
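`run_test`, used above to wrap each sub-suite, produces the `START TEST` / `END TEST` banner pairs seen throughout this log. A minimal sketch of such a wrapper, inferred from the banner output here (this is an assumption, not the actual `common/autotest_common.sh` implementation):

```shell
# Print START/END banners around a named test command, mimicking the
# "START TEST ... / END TEST ..." markers in this log (sketch only).
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  "$@"
  local rc=$?
  echo "END TEST $name"
  echo "************************************"
  return $rc
}
run_test_sketch demo_test true
```

The real helper also records timing and argument counts (visible in the `'[' 4 -le 1 ']'` checks above); those details are omitted here.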
00:36:13.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:13.749 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:13.749 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:36:13.749 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:13.749 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:13.749 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:13.749 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:13.749 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:13.749 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:13.749 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:13.749 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:13.750 10:10:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:13.750 10:10:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:13.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.750 --rc genhtml_branch_coverage=1 00:36:13.750 --rc genhtml_function_coverage=1 00:36:13.750 --rc genhtml_legend=1 00:36:13.750 --rc geninfo_all_blocks=1 00:36:13.750 --rc geninfo_unexecuted_blocks=1 00:36:13.750 00:36:13.750 ' 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:13.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.750 --rc genhtml_branch_coverage=1 00:36:13.750 --rc genhtml_function_coverage=1 00:36:13.750 --rc genhtml_legend=1 00:36:13.750 --rc geninfo_all_blocks=1 00:36:13.750 --rc geninfo_unexecuted_blocks=1 00:36:13.750 00:36:13.750 ' 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:13.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.750 --rc genhtml_branch_coverage=1 00:36:13.750 --rc genhtml_function_coverage=1 00:36:13.750 --rc genhtml_legend=1 00:36:13.750 --rc geninfo_all_blocks=1 00:36:13.750 --rc geninfo_unexecuted_blocks=1 00:36:13.750 00:36:13.750 ' 00:36:13.750 10:10:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:13.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.750 --rc genhtml_branch_coverage=1 00:36:13.750 --rc genhtml_function_coverage=1 00:36:13.750 --rc genhtml_legend=1 00:36:13.750 --rc geninfo_all_blocks=1 00:36:13.750 --rc geninfo_unexecuted_blocks=1 00:36:13.750 00:36:13.750 ' 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:13.750 10:10:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.750 
10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:13.750 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:13.751 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:13.751 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:13.751 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:13.751 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:13.751 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:13.751 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:13.751 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:13.751 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:13.751 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:13.751 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:36:13.751 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:13.751 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:20.314 
10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:20.314 10:10:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:20.314 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:20.314 10:10:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:20.314 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:20.314 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:20.315 10:10:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:20.315 Found net devices under 0000:86:00.0: cvl_0_0 00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:20.315 Found net devices under 0000:86:00.1: cvl_0_1 00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
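The loop traced above resolves each PCI address to its kernel net interface by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the path prefix. A hedged re-creation of that lookup against a mocked sysfs tree (the temp-dir layout stands in for real sysfs; it is invented for illustration):

```shell
# Mock two PCI devices with one net interface each, then resolve interface
# names the same way the trace above does (glob, then strip the path prefix).
tmp=$(mktemp -d)
mkdir -p "$tmp/0000:86:00.0/net/cvl_0_0" "$tmp/0000:86:00.1/net/cvl_0_1"
net_devs=()
for pci in "$tmp"/*; do
  pci_net_devs=("$pci"/net/*)              # e.g. .../0000:86:00.0/net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")  # keep only the interface name
  echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$tmp"
```

Because the glob sorts lexicographically, `0000:86:00.0` is visited before `0000:86:00.1`, matching the device order in the log.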
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 ))
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]]
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]]
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:36:20.315 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:20.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:20.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms
00:36:20.315
00:36:20.315 --- 10.0.0.2 ping statistics ---
00:36:20.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:20.315 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:20.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:20.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms
00:36:20.315
00:36:20.315 --- 10.0.0.1 ping statistics ---
00:36:20.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:20.315 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=1481261
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 1481261
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1481261 ']'
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:20.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:36:20.315 [2024-12-07 10:10:48.165787] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:36:20.315 [2024-12-07 10:10:48.166680] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:36:20.315 [2024-12-07 10:10:48.166714] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:20.315 [2024-12-07 10:10:48.225454] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:36:20.315 [2024-12-07 10:10:48.266904] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:20.315 [2024-12-07 10:10:48.266943] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:20.315 [2024-12-07 10:10:48.266965] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:20.315 [2024-12-07 10:10:48.266993] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:20.315 [2024-12-07 10:10:48.266999] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:20.315 [2024-12-07 10:10:48.267045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:36:20.315 [2024-12-07 10:10:48.267129] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:36:20.315 [2024-12-07 10:10:48.267130] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:36:20.315 [2024-12-07 10:10:48.336298] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:36:20.315 [2024-12-07 10:10:48.336332] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:36:20.315 [2024-12-07 10:10:48.336531] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:36:20.315 [2024-12-07 10:10:48.336644] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0
00:36:20.315 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:36:20.316 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:36:20.316 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:36:20.316 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:20.316 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
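The namespace topology that `nvmf/common.sh` builds in the trace above (target interface moved into a private netns, initiator left in the default namespace, port 4420 opened, reachability pinged both ways) can be summarized as the following sketch. The `run()` wrapper and `DRY_RUN` flag are local additions for illustration only; the real script executes these commands directly and requires root.

```shell
#!/usr/bin/env bash
# Sketch of the netns split performed above: cvl_0_0 (target side, 10.0.0.2)
# goes into its own namespace, cvl_0_1 (initiator side, 10.0.0.1) stays put.
set -euo pipefail

# DRY_RUN=1 prints the plan instead of executing it (testing convenience).
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_tcp_netns() {
    local target_if=$1 initiator_if=$2 ns=${3:-cvl_0_0_ns_spdk}
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"
    run ip addr add 10.0.0.1/24 dev "$initiator_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    # Allow NVMe/TCP traffic (port 4420) in from the initiator interface.
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    # Verify reachability in both directions, as the harness does with ping.
    run ping -c 1 10.0.0.2
    run ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

With the target moved into the namespace, `nvmf_tgt` is then launched via `ip netns exec`, which is why every subsequent RPC in the log targets a process living inside `cvl_0_0_ns_spdk`.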
00:36:20.316 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:36:20.316 [2024-12-07 10:10:48.567849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:20.316 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:36:20.316 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:20.316 [2024-12-07 10:10:48.944153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:20.316 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:20.576 10:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:36:20.835 Malloc0
00:36:20.835 10:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:36:20.835 Delay0
00:36:20.835 10:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:21.095 10:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:36:21.354 NULL1
00:36:21.354 10:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:36:21.614 10:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:36:21.614 10:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1481530
00:36:21.614 10:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:21.614 10:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:21.873 10:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:21.873 10:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:36:21.873 10:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:36:22.133 true
00:36:22.133 10:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:22.134 10:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:22.393 10:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:22.652 10:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:36:22.652 10:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:36:22.652 true
00:36:22.652 10:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:22.652 10:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:23.610 Read completed with error (sct=0, sc=11)
00:36:23.868 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:23.868 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:36:23.868 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:36:24.126 true
00:36:24.126 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:24.126 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:24.385 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:24.643 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:36:24.643 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:36:24.643 true
00:36:24.643 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:24.643 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:26.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:26.017 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:26.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:26.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:26.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:26.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:26.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:26.017 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:36:26.017 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:36:26.276 true
00:36:26.276 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:26.276 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:27.210 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:27.210 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:27.210 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:36:27.211 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:36:27.469 true
00:36:27.469 10:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:27.469 10:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:27.728 10:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:27.728 10:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:36:27.728 10:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:36:27.988 true
00:36:27.988 10:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:27.988 10:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:29.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:29.368 10:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:29.368 10:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:36:29.368 10:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:36:29.627 true
00:36:29.627 10:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:29.627 10:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:29.887 10:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:30.147 10:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:36:30.147 10:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:36:30.147 true
00:36:30.147 10:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:30.147 10:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:31.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:31.346 10:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:31.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:31.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:31.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:31.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:31.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:31.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:31.347 10:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:36:31.347 10:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:36:31.606 true
00:36:31.606 10:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:31.606 10:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:32.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:32.539 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:32.796 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:36:32.796 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:36:32.796 true
00:36:32.796 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:32.796 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:33.054 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:33.311 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:36:33.311 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:36:33.568 true
00:36:33.568 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:33.568 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:34.499 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:34.499 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:34.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:34.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:34.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:34.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:34.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:34.757 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:36:34.757 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:36:35.015 true
00:36:35.015 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:35.015 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:35.948 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:35.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:35.948 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:36:35.948 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:36:36.205 true
00:36:36.205 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:36.205 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:36.463 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:36.720 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:36:36.720 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:36:36.978 true
00:36:36.978 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:36.978 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:37.910 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:37.910 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:37.910 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:37.910 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:37.910 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:38.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:38.168 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:36:38.168 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:36:38.426 true
00:36:38.426 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:38.426 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:39.438 10:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:39.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:39.438 10:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:36:39.438 10:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:36:39.438 true
00:36:39.696 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:39.696 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:39.696 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:39.954 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:36:39.954 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:36:40.212 true
00:36:40.212 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:40.212 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:41.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:41.146 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:41.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:41.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:41.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:41.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:41.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:41.404 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:36:41.404 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:36:41.661 true
00:36:41.661 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:41.661 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:42.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:36:42.595 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:42.595 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:36:42.595 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:36:42.853 true
00:36:42.853 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:42.853 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:43.111 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:43.368 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:36:43.368 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:36:43.368 true
00:36:43.368 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530
00:36:43.368 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:44.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.743 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.743 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:44.743 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:45.002 true 00:36:45.002 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530 00:36:45.002 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.940 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.940 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:45.940 
10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:46.200 true 00:36:46.200 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530 00:36:46.200 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.460 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.460 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:46.460 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:46.718 true 00:36:46.718 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530 00:36:46.718 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.089 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:48.089 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1025 00:36:48.089 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:48.089 true 00:36:48.089 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530 00:36:48.089 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.348 10:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:48.606 10:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:48.606 10:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:48.865 true 00:36:48.865 10:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530 00:36:48.865 10:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:49.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:49.797 10:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:50.056 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:50.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:50.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:50.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:50.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:50.056 10:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:50.056 10:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:50.315 true 00:36:50.315 10:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530 00:36:50.315 10:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.250 10:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:51.250 10:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:36:51.250 10:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:36:51.509 true 00:36:51.509 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530 00:36:51.509 10:11:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.767 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:51.767 Initializing NVMe Controllers 00:36:51.767 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:51.767 Controller IO queue size 128, less than required. 00:36:51.767 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:51.767 Controller IO queue size 128, less than required. 00:36:51.767 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:51.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:51.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:51.767 Initialization complete. Launching workers. 
00:36:51.767 ======================================================== 00:36:51.767 Latency(us) 00:36:51.767 Device Information : IOPS MiB/s Average min max 00:36:51.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1730.24 0.84 47780.72 2286.28 1016888.91 00:36:51.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16798.49 8.20 7600.18 1132.89 379918.53 00:36:51.767 ======================================================== 00:36:51.767 Total : 18528.73 9.05 11352.31 1132.89 1016888.91 00:36:51.767 00:36:52.026 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:36:52.026 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:36:52.026 true 00:36:52.026 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1481530 00:36:52.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1481530) - No such process 00:36:52.285 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1481530 00:36:52.285 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.285 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:52.543 10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:36:52.543 
10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:36:52.543 10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:36:52.543 10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:52.543 10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:36:52.802 null0 00:36:52.802 10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:52.802 10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:52.802 10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:36:52.802 null1 00:36:53.061 10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:53.061 10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:53.061 10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:53.061 null2 00:36:53.061 10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:53.061 10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:53.061 10:11:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:53.320 null3 00:36:53.320 10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:53.320 10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:53.320 10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:53.579 null4 00:36:53.579 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:53.579 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:53.579 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:36:53.579 null5 00:36:53.579 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:53.579 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:53.579 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:36:53.838 null6 00:36:53.838 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:53.838 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:53.838 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:36:54.097 null7 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:54.098 10:11:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1486728 1486731 1486733 1486736 1486739 1486742 1486745 1486748 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.098 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:54.358 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.358 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:54.358 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:54.358 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:54.358 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:54.358 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:54.358 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:54.358 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:54.358 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.358 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.358 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:54.619 10:11:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:54.619 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:54.893 10:11:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.893 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.893 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.894 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.894 10:11:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:55.153 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.153 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:55.153 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:55.153 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:55.153 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:55.153 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:55.153 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:55.153 10:11:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.413 10:11:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:55.413 10:11:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.413 10:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:55.413 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.413 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:55.673 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:55.673 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:55.673 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:55.673 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:55.673 10:11:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:55.673 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:55.673 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.673 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.673 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:55.673 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.673 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.673 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:55.673 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.673 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.674 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
6 nqn.2016-06.io.spdk:cnode1 null5 00:36:55.674 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.674 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.674 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:55.674 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.674 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.674 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:55.674 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.674 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.674 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:55.674 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.674 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.674 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:55.674 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.674 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.674 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:55.933 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:55.933 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:55.933 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:55.933 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:55.933 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.933 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:55.933 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:55.933 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.192 10:11:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.192 10:11:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.192 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:56.451 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:56.451 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:56.452 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:56.452 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:56.452 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:56.452 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:56.452 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:56.452 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:56.452 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.452 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.452 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 
null3 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:56.709 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:56.710 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:56.710 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:36:56.710 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:56.710 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:56.710 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:56.710 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.967 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:57.225 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:57.225 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:57.225 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:57.225 10:11:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.225 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:57.225 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:57.225 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:57.225 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
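The iteration pattern traced above (increment `i`, check `i < 10`, hot-add namespaces 1–8 backed by null bdevs via `nvmf_subsystem_add_ns`, then remove them via `nvmf_subsystem_remove_ns`) can be sketched as a standalone loop. This is a reduced illustration, not the test script itself: the `rpc` function below is a stub standing in for `scripts/rpc.py`, and the iteration and namespace counts are cut down for brevity.

```shell
# Sketch of the hot-plug stress loop seen in target/ns_hotplug_stress.sh
# (lines @16-@18 in the trace). "rpc" is a stub for scripts/rpc.py so the
# sketch runs without an SPDK target.
rpc() { echo "rpc $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
i=0
while (( i < 3 )); do            # the real test runs 10 iterations
  for n in 1 2 3; do             # the real test cycles namespaces 1..8
    rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
  done
  for n in 1 2 3; do
    rpc nvmf_subsystem_remove_ns "$NQN" "$n"
  done
  (( ++i ))
done
echo "completed $i iterations"
```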
00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.484 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.742 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:58.000 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.000 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.000 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:58.001 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:58.001 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:58.001 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:58.001 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:58.001 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:58.001 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:58.001 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:58.001 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:58.263 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
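The namespace IDs in the trace are added and removed in shuffled, non-sequential order (e.g. 7, 3, 8, 5, 4, 1, 6, 2 above). One way such interleaving arises is when the per-namespace RPCs are issued as background jobs and then awaited, so completion order is nondeterministic. The sketch below demonstrates that pattern under that assumption; the real script may use a different parallelization or shuffling scheme, and `rpc` is again a stub for `scripts/rpc.py`.

```shell
# Stub standing in for scripts/rpc.py so the sketch runs anywhere.
rpc() { echo "nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $1"; }

# Issue all removals as background jobs, then wait. The order the lines
# appear in is not guaranteed, which would match the shuffled namespace
# IDs observed in the trace above.
out=$(for n in 1 2 3 4 5 6 7 8; do rpc "$n" & done; wait)
count=$(printf '%s\n' "$out" | grep -c nvmf_subsystem_remove_ns)
echo "issued $count removals"
```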
00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:36:58.264 10:11:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:58.264 rmmod nvme_tcp 00:36:58.264 rmmod nvme_fabrics 00:36:58.264 rmmod nvme_keyring 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 1481261 ']' 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 1481261 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1481261 ']' 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1481261 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@955 -- # uname 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:58.264 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1481261 00:36:58.525 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:58.525 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:58.525 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1481261' 00:36:58.525 killing process with pid 1481261 00:36:58.525 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1481261 00:36:58.525 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1481261 00:36:58.525 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:58.525 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:58.525 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:58.525 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:36:58.525 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:36:58.525 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:58.525 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 
-- # iptables-restore 00:36:58.525 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:58.525 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:58.525 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:58.525 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:58.525 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:01.059 00:37:01.059 real 0m47.058s 00:37:01.059 user 2m56.511s 00:37:01.059 sys 0m20.450s 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:01.059 ************************************ 00:37:01.059 END TEST nvmf_ns_hotplug_stress 00:37:01.059 ************************************ 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@10 -- # set +x 00:37:01.059 ************************************ 00:37:01.059 START TEST nvmf_delete_subsystem 00:37:01.059 ************************************ 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:01.059 * Looking for test storage... 00:37:01.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
scripts/common.sh@337 -- # read -ra ver2 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:01.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.059 --rc genhtml_branch_coverage=1 00:37:01.059 --rc genhtml_function_coverage=1 00:37:01.059 --rc genhtml_legend=1 00:37:01.059 --rc geninfo_all_blocks=1 00:37:01.059 --rc geninfo_unexecuted_blocks=1 00:37:01.059 00:37:01.059 ' 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:01.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.059 --rc genhtml_branch_coverage=1 00:37:01.059 --rc genhtml_function_coverage=1 00:37:01.059 --rc genhtml_legend=1 00:37:01.059 --rc geninfo_all_blocks=1 00:37:01.059 --rc geninfo_unexecuted_blocks=1 00:37:01.059 00:37:01.059 ' 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:01.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.059 --rc genhtml_branch_coverage=1 00:37:01.059 --rc genhtml_function_coverage=1 00:37:01.059 --rc genhtml_legend=1 00:37:01.059 --rc geninfo_all_blocks=1 00:37:01.059 --rc geninfo_unexecuted_blocks=1 00:37:01.059 00:37:01.059 ' 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:01.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.059 --rc genhtml_branch_coverage=1 00:37:01.059 --rc genhtml_function_coverage=1 00:37:01.059 --rc genhtml_legend=1 00:37:01.059 --rc geninfo_all_blocks=1 00:37:01.059 --rc geninfo_unexecuted_blocks=1 00:37:01.059 00:37:01.059 ' 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:01.059 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:01.060 10:11:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:01.060 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:06.335 10:11:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:06.335 10:11:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:06.335 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:06.335 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == 
unbound ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:06.335 Found net devices under 
0000:86:00.0: cvl_0_0 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:06.335 Found net devices under 0000:86:00.1: cvl_0_1 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:06.335 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:06.336 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:06.336 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:06.336 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:06.336 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:06.336 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:06.336 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:06.336 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:06.336 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # 
ip -4 addr flush cvl_0_1 00:37:06.336 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:06.336 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:06.336 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:06.336 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:06.336 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:06.336 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:06.336 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:06.336 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:06.336 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:06.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:06.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:37:06.595 00:37:06.595 --- 10.0.0.2 ping statistics --- 00:37:06.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:06.595 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:06.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:06.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:37:06.595 00:37:06.595 --- 10.0.0.1 ping statistics --- 00:37:06.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:06.595 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:06.595 
10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=1491008 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 1491008 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1491008 ']' 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:06.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:06.595 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:06.595 [2024-12-07 10:11:35.150492] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:06.595 [2024-12-07 10:11:35.151425] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:37:06.595 [2024-12-07 10:11:35.151460] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:06.595 [2024-12-07 10:11:35.209717] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:06.595 [2024-12-07 10:11:35.250725] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:06.595 [2024-12-07 10:11:35.250764] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:06.595 [2024-12-07 10:11:35.250771] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:06.595 [2024-12-07 10:11:35.250778] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:06.595 [2024-12-07 10:11:35.250784] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:06.595 [2024-12-07 10:11:35.250980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:06.595 [2024-12-07 10:11:35.250983] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:06.595 [2024-12-07 10:11:35.313729] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:06.595 [2024-12-07 10:11:35.313986] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:06.595 [2024-12-07 10:11:35.314050] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:06.853 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:06.853 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:37:06.853 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:06.853 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:06.853 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:06.853 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:06.853 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:06.853 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.853 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:06.853 [2024-12-07 10:11:35.383809] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:06.853 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.853 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:06.853 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.853 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:06.853 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.853 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:06.853 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.853 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:06.854 [2024-12-07 10:11:35.420043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:06.854 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.854 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:06.854 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.854 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:06.854 NULL1 00:37:06.854 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.854 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:06.854 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.854 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:06.854 Delay0 00:37:06.854 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.854 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:06.854 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.854 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:06.854 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.854 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1491042 00:37:06.854 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:06.854 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:06.854 [2024-12-07 10:11:35.504647] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:37:08.752 10:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:08.752 10:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.752 10:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:09.011 Read completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, 
sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 [2024-12-07 10:11:37.541474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1374b50 is same with the state(6) to be set 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 
Write completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 starting I/O failed: -6 00:37:09.012 [2024-12-07 10:11:37.542084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8ea800d640 is same with the state(6) to be set 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read 
completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Write completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.012 Read completed with error (sct=0, sc=8) 00:37:09.013 Write completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Write completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, 
sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Write completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Write completed with error (sct=0, sc=8) 00:37:09.013 Write completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Write completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Write completed with error (sct=0, sc=8) 00:37:09.013 Write completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Write completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Write completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.013 Write 
completed with error (sct=0, sc=8) 00:37:09.013 Read completed with error (sct=0, sc=8) 00:37:09.947 [2024-12-07 10:11:38.518571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1377a80 is same with the state(6) to be set 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.947 Write completed with error (sct=0, sc=8) 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.947 Write completed with error (sct=0, sc=8) 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.947 Write completed with error (sct=0, sc=8) 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.947 Write completed with error (sct=0, sc=8) 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.947 Write completed with error (sct=0, sc=8) 00:37:09.947 Write completed with error (sct=0, sc=8) 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.947 Write completed with error (sct=0, sc=8) 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.947 Write completed with error (sct=0, sc=8) 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.947 Write completed with error (sct=0, sc=8) 00:37:09.947 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with 
error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 [2024-12-07 10:11:38.545402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374320 is same with the state(6) to be set 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 
00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 [2024-12-07 10:11:38.545652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374820 is same with the state(6) to be set 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 
Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 [2024-12-07 10:11:38.545815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374e80 is same with the state(6) to be set 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Write completed with error (sct=0, sc=8) 00:37:09.948 Read completed with error (sct=0, sc=8) 00:37:09.948 [2024-12-07 10:11:38.546697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8ea800d310 is same with the state(6) to be set 00:37:09.948 Initializing NVMe Controllers 00:37:09.948 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:37:09.948 Controller IO queue size 128, less than required. 00:37:09.948 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:09.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:09.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:09.948 Initialization complete. Launching workers. 00:37:09.948 ======================================================== 00:37:09.948 Latency(us) 00:37:09.948 Device Information : IOPS MiB/s Average min max 00:37:09.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 194.17 0.09 945831.81 918.00 1011795.85 00:37:09.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.92 0.08 867131.41 239.14 1012587.55 00:37:09.948 ======================================================== 00:37:09.948 Total : 352.09 0.17 910533.18 239.14 1012587.55 00:37:09.948 00:37:09.948 [2024-12-07 10:11:38.547137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1377a80 (9): Bad file descriptor 00:37:09.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:37:09.948 10:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.948 10:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:37:09.948 10:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1491042 00:37:09.948 10:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:37:10.514 10:11:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1491042 00:37:10.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1491042) - No such process 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1491042 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1491042 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1491042 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:10.514 10:11:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:10.514 [2024-12-07 10:11:39.079974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.514 10:11:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1491716 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1491716 00:37:10.514 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:10.514 [2024-12-07 10:11:39.134740] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:37:11.081 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:11.081 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1491716 00:37:11.081 10:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:11.648 10:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:11.648 10:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1491716 00:37:11.648 10:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:11.907 10:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:11.907 10:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1491716 00:37:11.907 10:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:12.475 10:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:12.475 10:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1491716 00:37:12.475 10:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:13.043 10:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:13.043 10:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1491716 00:37:13.043 10:11:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:13.609 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:13.609 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1491716 00:37:13.609 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:13.866 Initializing NVMe Controllers 00:37:13.867 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:13.867 Controller IO queue size 128, less than required. 00:37:13.867 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:13.867 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:13.867 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:13.867 Initialization complete. Launching workers. 
00:37:13.867 ======================================================== 00:37:13.867 Latency(us) 00:37:13.867 Device Information : IOPS MiB/s Average min max 00:37:13.867 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002829.54 1000218.05 1042338.38 00:37:13.867 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006580.77 1000176.12 1042692.10 00:37:13.867 ======================================================== 00:37:13.867 Total : 256.00 0.12 1004705.15 1000176.12 1042692.10 00:37:13.867 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1491716 00:37:14.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1491716) - No such process 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1491716 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:14.126 rmmod nvme_tcp 00:37:14.126 rmmod nvme_fabrics 00:37:14.126 rmmod nvme_keyring 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 1491008 ']' 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 1491008 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1491008 ']' 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1491008 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1491008 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:14.126 10:11:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1491008' 00:37:14.126 killing process with pid 1491008 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1491008 00:37:14.126 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1491008 00:37:14.388 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:14.388 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:14.388 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:14.388 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:14.388 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:37:14.388 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:14.388 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:37:14.388 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:14.388 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:14.388 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:14.388 10:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:14.388 10:11:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:16.292 10:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:16.292 00:37:16.292 real 0m15.641s 00:37:16.292 user 0m25.661s 00:37:16.292 sys 0m5.930s 00:37:16.292 10:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:16.292 10:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:16.292 ************************************ 00:37:16.292 END TEST nvmf_delete_subsystem 00:37:16.292 ************************************ 00:37:16.558 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:16.558 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:16.558 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:16.558 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:16.558 ************************************ 00:37:16.558 START TEST nvmf_host_management 00:37:16.558 ************************************ 00:37:16.558 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:16.558 * Looking for test storage... 
00:37:16.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:16.558 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:16.558 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:37:16.558 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:16.558 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:16.558 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:16.558 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:16.558 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:16.558 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:16.558 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:16.559 10:11:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:16.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.559 --rc genhtml_branch_coverage=1 00:37:16.559 --rc genhtml_function_coverage=1 00:37:16.559 --rc genhtml_legend=1 00:37:16.559 --rc geninfo_all_blocks=1 00:37:16.559 --rc geninfo_unexecuted_blocks=1 00:37:16.559 00:37:16.559 ' 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:16.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.559 --rc genhtml_branch_coverage=1 00:37:16.559 --rc genhtml_function_coverage=1 00:37:16.559 --rc genhtml_legend=1 00:37:16.559 --rc geninfo_all_blocks=1 00:37:16.559 --rc geninfo_unexecuted_blocks=1 00:37:16.559 00:37:16.559 ' 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:16.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.559 --rc genhtml_branch_coverage=1 00:37:16.559 --rc genhtml_function_coverage=1 00:37:16.559 --rc genhtml_legend=1 00:37:16.559 --rc geninfo_all_blocks=1 00:37:16.559 --rc geninfo_unexecuted_blocks=1 00:37:16.559 00:37:16.559 ' 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:16.559 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.559 --rc genhtml_branch_coverage=1 00:37:16.559 --rc genhtml_function_coverage=1 00:37:16.559 --rc genhtml_legend=1 00:37:16.559 --rc geninfo_all_blocks=1 00:37:16.559 --rc geninfo_unexecuted_blocks=1 00:37:16.559 00:37:16.559 ' 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:16.559 10:11:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:16.559 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.560 
10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:16.560 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:16.819 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:16.819 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:16.819 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:16.819 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:22.081 
10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:22.081 10:11:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:22.081 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:22.081 10:11:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:22.081 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:22.081 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:22.082 10:11:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:22.082 Found net devices under 0000:86:00.0: cvl_0_0 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:22.082 Found net devices under 0000:86:00.1: cvl_0_1 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:22.082 10:11:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:22.082 
10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:22.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:22.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:37:22.082 00:37:22.082 --- 10.0.0.2 ping statistics --- 00:37:22.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:22.082 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:22.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:22.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:37:22.082 00:37:22.082 --- 10.0.0.1 ping statistics --- 00:37:22.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:22.082 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # 
'[' tcp == tcp ']' 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=1495692 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 1495692 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1495692 ']' 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:22.082 10:11:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:22.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:22.082 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.340 [2024-12-07 10:11:50.843644] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:22.340 [2024-12-07 10:11:50.844611] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:37:22.340 [2024-12-07 10:11:50.844650] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:22.341 [2024-12-07 10:11:50.904771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:22.341 [2024-12-07 10:11:50.947244] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:22.341 [2024-12-07 10:11:50.947282] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:22.341 [2024-12-07 10:11:50.947289] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:22.341 [2024-12-07 10:11:50.947295] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:22.341 [2024-12-07 10:11:50.947301] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:22.341 [2024-12-07 10:11:50.947406] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:22.341 [2024-12-07 10:11:50.947491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:37:22.341 [2024-12-07 10:11:50.947599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:22.341 [2024-12-07 10:11:50.947601] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:37:22.341 [2024-12-07 10:11:51.019493] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:22.341 [2024-12-07 10:11:51.019668] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:22.341 [2024-12-07 10:11:51.020157] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:22.341 [2024-12-07 10:11:51.020167] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:22.341 [2024-12-07 10:11:51.020460] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:37:22.341 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:22.341 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:37:22.341 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:22.341 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:22.341 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.599 [2024-12-07 10:11:51.088267] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.599 10:11:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.599 Malloc0 00:37:22.599 [2024-12-07 10:11:51.156316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1495745 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1495745 /var/tmp/bdevperf.sock 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1495745 ']' 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:22.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:22.599 { 00:37:22.599 "params": { 00:37:22.599 "name": "Nvme$subsystem", 00:37:22.599 "trtype": "$TEST_TRANSPORT", 00:37:22.599 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.599 "adrfam": "ipv4", 00:37:22.599 "trsvcid": "$NVMF_PORT", 00:37:22.599 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.599 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:22.599 "hdgst": ${hdgst:-false}, 00:37:22.599 "ddgst": ${ddgst:-false} 00:37:22.599 }, 00:37:22.599 "method": "bdev_nvme_attach_controller" 00:37:22.599 } 00:37:22.599 EOF 00:37:22.599 )") 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 
00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:37:22.599 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:22.599 "params": { 00:37:22.599 "name": "Nvme0", 00:37:22.599 "trtype": "tcp", 00:37:22.599 "traddr": "10.0.0.2", 00:37:22.599 "adrfam": "ipv4", 00:37:22.599 "trsvcid": "4420", 00:37:22.599 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:22.599 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:22.599 "hdgst": false, 00:37:22.599 "ddgst": false 00:37:22.599 }, 00:37:22.599 "method": "bdev_nvme_attach_controller" 00:37:22.599 }' 00:37:22.599 [2024-12-07 10:11:51.251660] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:37:22.599 [2024-12-07 10:11:51.251709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495745 ] 00:37:22.599 [2024-12-07 10:11:51.308714] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.858 [2024-12-07 10:11:51.348840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:22.858 Running I/O for 10 seconds... 
00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:23.117 10:11:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:37:23.117 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:37:23.377 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:37:23.377 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:23.377 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:23.377 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:23.377 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 
00:37:23.377 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.377 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.377 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:37:23.377 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:37:23.377 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:23.377 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:23.377 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:23.377 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:23.377 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.377 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.377 [2024-12-07 10:11:51.980108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1bb0 is same with the state(6) to be set 00:37:23.377 [2024-12-07 10:11:51.980148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1bb0 is same with the state(6) to be set 00:37:23.377 [2024-12-07 10:11:51.980156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1bb0 is same with the state(6) to be set 00:37:23.377 [2024-12-07 10:11:51.980162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x15a1bb0 is same with the state(6) to be set 00:37:23.377 [2024-12-07 10:11:51.980169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1bb0 is same with the state(6) to be set 00:37:23.378 [2024-12-07 10:11:51.984444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:23.378 [2024-12-07 10:11:51.984481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:23.378 [2024-12-07 10:11:51.984501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:23.378 [2024-12-07 10:11:51.984517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:37:23.378 [2024-12-07 10:11:51.984531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x767800 is same with the state(6) to be set 00:37:23.378 [2024-12-07 10:11:51.984590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984669] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 
10:11:51.984933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.378 [2024-12-07 10:11:51.984983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.378 [2024-12-07 10:11:51.984993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985026] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.379 [2024-12-07 10:11:51.985087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:23.379 [2024-12-07 10:11:51.985111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985457] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:23.379 [2024-12-07 10:11:51.985478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.379 [2024-12-07 10:11:51.985579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.379 [2024-12-07 10:11:51.985586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.380 [2024-12-07 10:11:51.985657] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x763b20 was disconnected and freed. reset controller. 
00:37:23.380 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.380 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.380 [2024-12-07 10:11:51.986601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:23.380 task offset: 92416 on job bdev=Nvme0n1 fails 00:37:23.380 00:37:23.380 Latency(us) 00:37:23.380 [2024-12-07T09:11:52.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:23.380 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:23.380 Job: Nvme0n1 ended in about 0.41 seconds with error 00:37:23.380 Verification LBA range: start 0x0 length 0x400 00:37:23.380 Nvme0n1 : 0.41 1773.52 110.85 157.21 0.00 32265.26 1517.30 28265.96 00:37:23.380 [2024-12-07T09:11:52.106Z] =================================================================================================================== 00:37:23.380 [2024-12-07T09:11:52.106Z] Total : 1773.52 110.85 157.21 0.00 32265.26 1517.30 28265.96 00:37:23.380 [2024-12-07 10:11:51.988988] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:23.380 [2024-12-07 10:11:51.989010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x767800 (9): Bad file descriptor 00:37:23.380 [2024-12-07 10:11:51.990056] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:37:23.380 [2024-12-07 10:11:51.990132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:37:23.380 [2024-12-07 10:11:51.990155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:23.380 [2024-12-07 10:11:51.990170] nvme_fabric.c: 
599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:37:23.380 [2024-12-07 10:11:51.990179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:37:23.380 [2024-12-07 10:11:51.990187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.380 [2024-12-07 10:11:51.990193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x767800 00:37:23.380 [2024-12-07 10:11:51.990212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x767800 (9): Bad file descriptor 00:37:23.380 [2024-12-07 10:11:51.990224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:23.380 [2024-12-07 10:11:51.990231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:23.380 [2024-12-07 10:11:51.990240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:23.380 [2024-12-07 10:11:51.990253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:23.380 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.380 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:24.315 10:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1495745 00:37:24.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1495745) - No such process 00:37:24.315 10:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:24.315 10:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:24.315 10:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:24.315 10:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:24.315 10:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:37:24.315 10:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:37:24.315 10:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:24.315 10:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:24.315 { 00:37:24.315 "params": { 00:37:24.315 "name": "Nvme$subsystem", 00:37:24.315 "trtype": "$TEST_TRANSPORT", 00:37:24.315 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:37:24.315 "adrfam": "ipv4", 00:37:24.315 "trsvcid": "$NVMF_PORT", 00:37:24.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:24.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:24.315 "hdgst": ${hdgst:-false}, 00:37:24.315 "ddgst": ${ddgst:-false} 00:37:24.315 }, 00:37:24.315 "method": "bdev_nvme_attach_controller" 00:37:24.315 } 00:37:24.315 EOF 00:37:24.315 )") 00:37:24.315 10:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:37:24.315 10:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:37:24.315 10:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:37:24.315 10:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:24.315 "params": { 00:37:24.315 "name": "Nvme0", 00:37:24.315 "trtype": "tcp", 00:37:24.315 "traddr": "10.0.0.2", 00:37:24.315 "adrfam": "ipv4", 00:37:24.315 "trsvcid": "4420", 00:37:24.315 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:24.315 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:24.315 "hdgst": false, 00:37:24.315 "ddgst": false 00:37:24.315 }, 00:37:24.315 "method": "bdev_nvme_attach_controller" 00:37:24.315 }' 00:37:24.574 [2024-12-07 10:11:53.053847] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:37:24.574 [2024-12-07 10:11:53.053894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496038 ] 00:37:24.574 [2024-12-07 10:11:53.108282] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:24.574 [2024-12-07 10:11:53.148528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.833 Running I/O for 1 seconds... 
00:37:25.770 1856.00 IOPS, 116.00 MiB/s 00:37:25.770 Latency(us) 00:37:25.770 [2024-12-07T09:11:54.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.770 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:25.770 Verification LBA range: start 0x0 length 0x400 00:37:25.770 Nvme0n1 : 1.01 1898.62 118.66 0.00 0.00 33188.44 6895.53 27924.03 00:37:25.770 [2024-12-07T09:11:54.496Z] =================================================================================================================== 00:37:25.770 [2024-12-07T09:11:54.496Z] Total : 1898.62 118.66 0.00 0.00 33188.44 6895.53 27924.03 00:37:26.030 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:26.030 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:26.030 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:26.030 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:26.030 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:26.030 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:26.030 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:26.030 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:26.030 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:26.030 
10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:26.030 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:26.030 rmmod nvme_tcp 00:37:26.030 rmmod nvme_fabrics 00:37:26.030 rmmod nvme_keyring 00:37:26.031 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:26.031 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:26.031 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:26.031 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 1495692 ']' 00:37:26.031 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 1495692 00:37:26.031 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1495692 ']' 00:37:26.031 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1495692 00:37:26.031 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:37:26.031 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:26.031 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1495692 00:37:26.031 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:26.031 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:26.031 10:11:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1495692' 00:37:26.031 killing process with pid 1495692 00:37:26.031 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1495692 00:37:26.031 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1495692 00:37:26.292 [2024-12-07 10:11:54.878287] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:26.292 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:26.292 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:26.292 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:26.292 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:26.292 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:37:26.292 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:26.292 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:37:26.292 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:26.292 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:26.292 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:26.292 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:26.292 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:28.826 10:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:28.826 10:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:28.826 00:37:28.826 real 0m11.917s 00:37:28.826 user 0m17.741s 00:37:28.826 sys 0m5.996s 00:37:28.826 10:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:28.826 10:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:28.826 ************************************ 00:37:28.826 END TEST nvmf_host_management 00:37:28.826 ************************************ 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:28.826 ************************************ 00:37:28.826 START TEST nvmf_lvol 00:37:28.826 ************************************ 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:28.826 * Looking for test storage... 
00:37:28.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:28.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.826 --rc genhtml_branch_coverage=1 00:37:28.826 --rc genhtml_function_coverage=1 00:37:28.826 --rc genhtml_legend=1 00:37:28.826 --rc geninfo_all_blocks=1 00:37:28.826 --rc geninfo_unexecuted_blocks=1 00:37:28.826 00:37:28.826 ' 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:28.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.826 --rc genhtml_branch_coverage=1 00:37:28.826 --rc genhtml_function_coverage=1 00:37:28.826 --rc genhtml_legend=1 00:37:28.826 --rc geninfo_all_blocks=1 00:37:28.826 --rc geninfo_unexecuted_blocks=1 00:37:28.826 00:37:28.826 ' 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:28.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.826 --rc genhtml_branch_coverage=1 00:37:28.826 --rc genhtml_function_coverage=1 00:37:28.826 --rc genhtml_legend=1 00:37:28.826 --rc geninfo_all_blocks=1 00:37:28.826 --rc geninfo_unexecuted_blocks=1 00:37:28.826 00:37:28.826 ' 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:28.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.826 --rc genhtml_branch_coverage=1 00:37:28.826 --rc genhtml_function_coverage=1 00:37:28.826 --rc genhtml_legend=1 00:37:28.826 --rc geninfo_all_blocks=1 00:37:28.826 --rc geninfo_unexecuted_blocks=1 00:37:28.826 00:37:28.826 ' 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:28.826 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:28.827 
10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:28.827 10:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:34.107 10:12:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:34.107 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:34.107 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:34.107 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:34.108 Found net devices under 0000:86:00.0: cvl_0_0 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:34.108 Found net devices under 0000:86:00.1: cvl_0_1 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:34.108 10:12:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:34.108 
10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:34.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:34.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:37:34.108 00:37:34.108 --- 10.0.0.2 ping statistics --- 00:37:34.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:34.108 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:34.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:34.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:37:34.108 00:37:34.108 --- 10.0.0.1 ping statistics --- 00:37:34.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:34.108 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=1499863 
00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 1499863 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1499863 ']' 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:34.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:34.108 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:34.108 [2024-12-07 10:12:02.659914] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:34.108 [2024-12-07 10:12:02.660826] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:37:34.108 [2024-12-07 10:12:02.660858] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:34.108 [2024-12-07 10:12:02.718573] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:34.108 [2024-12-07 10:12:02.760320] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:34.108 [2024-12-07 10:12:02.760358] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:34.108 [2024-12-07 10:12:02.760365] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:34.108 [2024-12-07 10:12:02.760371] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:34.108 [2024-12-07 10:12:02.760376] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:34.108 [2024-12-07 10:12:02.760424] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:34.108 [2024-12-07 10:12:02.760524] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:34.108 [2024-12-07 10:12:02.760526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:34.368 [2024-12-07 10:12:02.832638] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:34.368 [2024-12-07 10:12:02.832737] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:34.368 [2024-12-07 10:12:02.832798] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:37:34.368 [2024-12-07 10:12:02.832926] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:34.368 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:34.368 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:37:34.368 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:34.368 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:34.368 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:34.368 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:34.368 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:34.368 [2024-12-07 10:12:03.077091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:34.627 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:34.627 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:34.627 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:34.888 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:34.888 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:35.148 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:35.408 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d7a622c1-9377-47b6-8625-885810e1250e 00:37:35.408 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d7a622c1-9377-47b6-8625-885810e1250e lvol 20 00:37:35.408 10:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0dc3e56b-5155-412a-8b2b-93e8095ae9bb 00:37:35.408 10:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:35.668 10:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0dc3e56b-5155-412a-8b2b-93e8095ae9bb 00:37:35.927 10:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:36.187 [2024-12-07 10:12:04.673126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:36.187 10:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:36.187 
10:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1500346 00:37:36.187 10:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:36.187 10:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:37.563 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0dc3e56b-5155-412a-8b2b-93e8095ae9bb MY_SNAPSHOT 00:37:37.563 10:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1527297d-9acf-4156-a89e-4375c0d2461a 00:37:37.563 10:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0dc3e56b-5155-412a-8b2b-93e8095ae9bb 30 00:37:37.821 10:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1527297d-9acf-4156-a89e-4375c0d2461a MY_CLONE 00:37:38.079 10:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c34a2b88-8b02-4f25-aecb-035a84d23650 00:37:38.079 10:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c34a2b88-8b02-4f25-aecb-035a84d23650 00:37:38.646 10:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1500346 00:37:46.761 Initializing NVMe Controllers 00:37:46.761 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:46.761 
Controller IO queue size 128, less than required. 00:37:46.761 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:46.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:46.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:46.761 Initialization complete. Launching workers. 00:37:46.761 ======================================================== 00:37:46.761 Latency(us) 00:37:46.761 Device Information : IOPS MiB/s Average min max 00:37:46.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12170.34 47.54 10518.38 2129.57 59830.37 00:37:46.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12020.44 46.95 10646.78 4584.69 55020.78 00:37:46.761 ======================================================== 00:37:46.761 Total : 24190.78 94.50 10582.19 2129.57 59830.37 00:37:46.761 00:37:46.761 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:46.761 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0dc3e56b-5155-412a-8b2b-93e8095ae9bb 00:37:47.018 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d7a622c1-9377-47b6-8625-885810e1250e 00:37:47.276 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:47.276 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:47.276 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:37:47.276 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:47.276 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:47.276 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:47.276 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:47.276 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:47.276 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:47.276 rmmod nvme_tcp 00:37:47.276 rmmod nvme_fabrics 00:37:47.276 rmmod nvme_keyring 00:37:47.276 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:47.276 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:47.276 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:47.276 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 1499863 ']' 00:37:47.276 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 1499863 00:37:47.277 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1499863 ']' 00:37:47.277 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1499863 00:37:47.277 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:37:47.277 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:47.277 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 1499863 00:37:47.277 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:47.277 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:47.277 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1499863' 00:37:47.277 killing process with pid 1499863 00:37:47.277 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1499863 00:37:47.277 10:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1499863 00:37:47.535 10:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:47.535 10:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:47.535 10:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:47.535 10:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:47.535 10:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:37:47.535 10:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:47.535 10:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:37:47.535 10:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:47.535 10:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:47.535 10:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.535 10:12:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:47.535 10:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:50.074 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:50.074 00:37:50.074 real 0m21.168s 00:37:50.074 user 0m54.708s 00:37:50.074 sys 0m9.714s 00:37:50.074 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:50.074 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:50.074 ************************************ 00:37:50.074 END TEST nvmf_lvol 00:37:50.074 ************************************ 00:37:50.074 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:50.074 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:50.075 ************************************ 00:37:50.075 START TEST nvmf_lvs_grow 00:37:50.075 ************************************ 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:50.075 * Looking for test storage... 
00:37:50.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:50.075 10:12:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:50.075 10:12:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:50.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.075 --rc genhtml_branch_coverage=1 00:37:50.075 --rc genhtml_function_coverage=1 00:37:50.075 --rc genhtml_legend=1 00:37:50.075 --rc geninfo_all_blocks=1 00:37:50.075 --rc geninfo_unexecuted_blocks=1 00:37:50.075 00:37:50.075 ' 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:50.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.075 --rc genhtml_branch_coverage=1 00:37:50.075 --rc genhtml_function_coverage=1 00:37:50.075 --rc genhtml_legend=1 00:37:50.075 --rc geninfo_all_blocks=1 00:37:50.075 --rc geninfo_unexecuted_blocks=1 00:37:50.075 00:37:50.075 ' 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:50.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.075 --rc genhtml_branch_coverage=1 00:37:50.075 --rc genhtml_function_coverage=1 00:37:50.075 --rc genhtml_legend=1 00:37:50.075 --rc geninfo_all_blocks=1 00:37:50.075 --rc geninfo_unexecuted_blocks=1 00:37:50.075 00:37:50.075 ' 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:50.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.075 --rc genhtml_branch_coverage=1 00:37:50.075 --rc genhtml_function_coverage=1 00:37:50.075 --rc genhtml_legend=1 00:37:50.075 --rc geninfo_all_blocks=1 00:37:50.075 --rc 
geninfo_unexecuted_blocks=1 00:37:50.075 00:37:50.075 ' 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:37:50.075 10:12:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:50.075 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.076 10:12:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:50.076 10:12:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:50.076 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:55.349 
10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:55.349 10:12:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:55.349 10:12:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:55.349 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:55.349 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:55.349 Found net devices under 0000:86:00.0: cvl_0_0 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:55.349 10:12:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:55.349 Found net devices under 0000:86:00.1: cvl_0_1 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:55.349 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:55.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:55.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:37:55.350 00:37:55.350 --- 10.0.0.2 ping statistics --- 00:37:55.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.350 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:55.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:55.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:37:55.350 00:37:55.350 --- 10.0.0.1 ping statistics --- 00:37:55.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.350 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:55.350 10:12:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=1505810 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 1505810 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1505810 ']' 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:55.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:55.350 [2024-12-07 10:12:23.651068] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:55.350 [2024-12-07 10:12:23.652035] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:37:55.350 [2024-12-07 10:12:23.652071] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:55.350 [2024-12-07 10:12:23.710995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:55.350 [2024-12-07 10:12:23.751471] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:55.350 [2024-12-07 10:12:23.751511] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:55.350 [2024-12-07 10:12:23.751518] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:55.350 [2024-12-07 10:12:23.751524] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:55.350 [2024-12-07 10:12:23.751529] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:55.350 [2024-12-07 10:12:23.751549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:55.350 [2024-12-07 10:12:23.812782] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:55.350 [2024-12-07 10:12:23.813023] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:55.350 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:55.350 [2024-12-07 10:12:24.043992] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:55.350 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:55.350 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:55.350 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:55.350 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:55.610 ************************************ 00:37:55.610 START TEST lvs_grow_clean 00:37:55.610 ************************************ 00:37:55.610 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:37:55.610 10:12:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:55.610 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:55.610 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:55.610 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:55.610 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:55.610 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:55.610 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:55.610 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:55.610 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:55.610 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:55.610 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:55.869 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=db509c50-edfc-4271-be2b-3568ba882512 00:37:55.869 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db509c50-edfc-4271-be2b-3568ba882512 00:37:55.869 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:56.127 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:56.127 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:56.127 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u db509c50-edfc-4271-be2b-3568ba882512 lvol 150 00:37:56.386 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=908e905b-c515-401b-b0a6-2fc381b94c4b 00:37:56.386 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:56.386 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:56.386 [2024-12-07 10:12:25.051865] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:56.386 [2024-12-07 10:12:25.051937] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:56.386 true 00:37:56.386 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db509c50-edfc-4271-be2b-3568ba882512 00:37:56.386 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:56.643 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:56.643 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:56.900 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 908e905b-c515-401b-b0a6-2fc381b94c4b 00:37:56.900 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:57.158 [2024-12-07 10:12:25.792523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:57.158 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:57.416 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:57.416 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1506143 00:37:57.416 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:57.416 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1506143 /var/tmp/bdevperf.sock 00:37:57.416 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1506143 ']' 00:37:57.416 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:57.416 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:57.416 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:57.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:37:57.416 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:57.416 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:57.416 [2024-12-07 10:12:26.017968] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:37:57.416 [2024-12-07 10:12:26.018014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506143 ] 00:37:57.416 [2024-12-07 10:12:26.070906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:57.416 [2024-12-07 10:12:26.112908] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:57.674 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:57.674 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:37:57.674 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:57.932 Nvme0n1 00:37:57.932 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:58.192 [ 00:37:58.192 { 00:37:58.192 "name": "Nvme0n1", 00:37:58.192 "aliases": [ 00:37:58.192 "908e905b-c515-401b-b0a6-2fc381b94c4b" 00:37:58.192 ], 00:37:58.192 "product_name": "NVMe disk", 00:37:58.192 
"block_size": 4096, 00:37:58.192 "num_blocks": 38912, 00:37:58.192 "uuid": "908e905b-c515-401b-b0a6-2fc381b94c4b", 00:37:58.192 "numa_id": 1, 00:37:58.192 "assigned_rate_limits": { 00:37:58.192 "rw_ios_per_sec": 0, 00:37:58.192 "rw_mbytes_per_sec": 0, 00:37:58.192 "r_mbytes_per_sec": 0, 00:37:58.192 "w_mbytes_per_sec": 0 00:37:58.192 }, 00:37:58.192 "claimed": false, 00:37:58.192 "zoned": false, 00:37:58.192 "supported_io_types": { 00:37:58.192 "read": true, 00:37:58.192 "write": true, 00:37:58.192 "unmap": true, 00:37:58.192 "flush": true, 00:37:58.192 "reset": true, 00:37:58.192 "nvme_admin": true, 00:37:58.192 "nvme_io": true, 00:37:58.192 "nvme_io_md": false, 00:37:58.192 "write_zeroes": true, 00:37:58.192 "zcopy": false, 00:37:58.192 "get_zone_info": false, 00:37:58.192 "zone_management": false, 00:37:58.192 "zone_append": false, 00:37:58.192 "compare": true, 00:37:58.192 "compare_and_write": true, 00:37:58.192 "abort": true, 00:37:58.192 "seek_hole": false, 00:37:58.192 "seek_data": false, 00:37:58.192 "copy": true, 00:37:58.192 "nvme_iov_md": false 00:37:58.192 }, 00:37:58.192 "memory_domains": [ 00:37:58.192 { 00:37:58.192 "dma_device_id": "system", 00:37:58.192 "dma_device_type": 1 00:37:58.192 } 00:37:58.192 ], 00:37:58.192 "driver_specific": { 00:37:58.192 "nvme": [ 00:37:58.192 { 00:37:58.192 "trid": { 00:37:58.192 "trtype": "TCP", 00:37:58.192 "adrfam": "IPv4", 00:37:58.192 "traddr": "10.0.0.2", 00:37:58.192 "trsvcid": "4420", 00:37:58.192 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:58.192 }, 00:37:58.192 "ctrlr_data": { 00:37:58.192 "cntlid": 1, 00:37:58.192 "vendor_id": "0x8086", 00:37:58.192 "model_number": "SPDK bdev Controller", 00:37:58.192 "serial_number": "SPDK0", 00:37:58.192 "firmware_revision": "24.09.1", 00:37:58.192 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:58.192 "oacs": { 00:37:58.192 "security": 0, 00:37:58.192 "format": 0, 00:37:58.192 "firmware": 0, 00:37:58.192 "ns_manage": 0 00:37:58.192 }, 00:37:58.192 "multi_ctrlr": true, 
00:37:58.192 "ana_reporting": false 00:37:58.192 }, 00:37:58.192 "vs": { 00:37:58.192 "nvme_version": "1.3" 00:37:58.192 }, 00:37:58.192 "ns_data": { 00:37:58.192 "id": 1, 00:37:58.192 "can_share": true 00:37:58.192 } 00:37:58.192 } 00:37:58.192 ], 00:37:58.193 "mp_policy": "active_passive" 00:37:58.193 } 00:37:58.193 } 00:37:58.193 ] 00:37:58.193 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:58.193 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1506368 00:37:58.193 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:58.193 Running I/O for 10 seconds... 00:37:59.130 Latency(us) 00:37:59.130 [2024-12-07T09:12:27.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:59.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:59.130 Nvme0n1 : 1.00 21149.00 82.61 0.00 0.00 0.00 0.00 0.00 00:37:59.130 [2024-12-07T09:12:27.856Z] =================================================================================================================== 00:37:59.130 [2024-12-07T09:12:27.856Z] Total : 21149.00 82.61 0.00 0.00 0.00 0.00 0.00 00:37:59.130 00:38:00.068 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u db509c50-edfc-4271-be2b-3568ba882512 00:38:00.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:00.327 Nvme0n1 : 2.00 21298.50 83.20 0.00 0.00 0.00 0.00 0.00 00:38:00.327 [2024-12-07T09:12:29.053Z] 
=================================================================================================================== 00:38:00.327 [2024-12-07T09:12:29.053Z] Total : 21298.50 83.20 0.00 0.00 0.00 0.00 0.00 00:38:00.327 00:38:00.327 true 00:38:00.327 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db509c50-edfc-4271-be2b-3568ba882512 00:38:00.327 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:00.585 10:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:00.585 10:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:00.585 10:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1506368 00:38:01.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:01.153 Nvme0n1 : 3.00 21351.00 83.40 0.00 0.00 0.00 0.00 0.00 00:38:01.153 [2024-12-07T09:12:29.879Z] =================================================================================================================== 00:38:01.153 [2024-12-07T09:12:29.879Z] Total : 21351.00 83.40 0.00 0.00 0.00 0.00 0.00 00:38:01.153 00:38:02.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:02.532 Nvme0n1 : 4.00 21371.25 83.48 0.00 0.00 0.00 0.00 0.00 00:38:02.532 [2024-12-07T09:12:31.258Z] =================================================================================================================== 00:38:02.532 [2024-12-07T09:12:31.258Z] Total : 21371.25 83.48 0.00 0.00 0.00 0.00 0.00 00:38:02.532 00:38:03.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:38:03.468 Nvme0n1 : 5.00 21434.60 83.73 0.00 0.00 0.00 0.00 0.00 00:38:03.468 [2024-12-07T09:12:32.194Z] =================================================================================================================== 00:38:03.468 [2024-12-07T09:12:32.194Z] Total : 21434.60 83.73 0.00 0.00 0.00 0.00 0.00 00:38:03.468 00:38:04.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:04.597 Nvme0n1 : 6.00 21479.50 83.90 0.00 0.00 0.00 0.00 0.00 00:38:04.597 [2024-12-07T09:12:33.323Z] =================================================================================================================== 00:38:04.597 [2024-12-07T09:12:33.323Z] Total : 21479.50 83.90 0.00 0.00 0.00 0.00 0.00 00:38:04.597 00:38:05.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:05.322 Nvme0n1 : 7.00 21505.86 84.01 0.00 0.00 0.00 0.00 0.00 00:38:05.322 [2024-12-07T09:12:34.048Z] =================================================================================================================== 00:38:05.322 [2024-12-07T09:12:34.048Z] Total : 21505.86 84.01 0.00 0.00 0.00 0.00 0.00 00:38:05.322 00:38:06.273 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:06.273 Nvme0n1 : 8.00 21530.62 84.10 0.00 0.00 0.00 0.00 0.00 00:38:06.273 [2024-12-07T09:12:34.999Z] =================================================================================================================== 00:38:06.273 [2024-12-07T09:12:34.999Z] Total : 21530.62 84.10 0.00 0.00 0.00 0.00 0.00 00:38:06.273 00:38:07.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:07.231 Nvme0n1 : 9.00 21557.89 84.21 0.00 0.00 0.00 0.00 0.00 00:38:07.231 [2024-12-07T09:12:35.957Z] =================================================================================================================== 00:38:07.231 [2024-12-07T09:12:35.957Z] Total : 21557.89 84.21 0.00 0.00 0.00 0.00 0.00 00:38:07.231 
00:38:08.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:08.167 Nvme0n1 : 10.00 21579.70 84.30 0.00 0.00 0.00 0.00 0.00 00:38:08.167 [2024-12-07T09:12:36.893Z] =================================================================================================================== 00:38:08.167 [2024-12-07T09:12:36.893Z] Total : 21579.70 84.30 0.00 0.00 0.00 0.00 0.00 00:38:08.167 00:38:08.167 00:38:08.167 Latency(us) 00:38:08.167 [2024-12-07T09:12:36.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:08.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:08.167 Nvme0n1 : 10.01 21579.88 84.30 0.00 0.00 5927.23 1567.17 10143.83 00:38:08.167 [2024-12-07T09:12:36.893Z] =================================================================================================================== 00:38:08.167 [2024-12-07T09:12:36.893Z] Total : 21579.88 84.30 0.00 0.00 5927.23 1567.17 10143.83 00:38:08.167 { 00:38:08.167 "results": [ 00:38:08.167 { 00:38:08.167 "job": "Nvme0n1", 00:38:08.167 "core_mask": "0x2", 00:38:08.167 "workload": "randwrite", 00:38:08.167 "status": "finished", 00:38:08.167 "queue_depth": 128, 00:38:08.167 "io_size": 4096, 00:38:08.167 "runtime": 10.00585, 00:38:08.167 "iops": 21579.875772672985, 00:38:08.167 "mibps": 84.29638973700385, 00:38:08.167 "io_failed": 0, 00:38:08.167 "io_timeout": 0, 00:38:08.167 "avg_latency_us": 5927.225488302601, 00:38:08.167 "min_latency_us": 1567.1652173913044, 00:38:08.167 "max_latency_us": 10143.83304347826 00:38:08.167 } 00:38:08.167 ], 00:38:08.167 "core_count": 1 00:38:08.167 } 00:38:08.167 10:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1506143 00:38:08.167 10:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1506143 ']' 00:38:08.167 10:12:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1506143 00:38:08.167 10:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:38:08.426 10:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:08.426 10:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1506143 00:38:08.426 10:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:08.426 10:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:08.426 10:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1506143' 00:38:08.426 killing process with pid 1506143 00:38:08.426 10:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1506143 00:38:08.426 Received shutdown signal, test time was about 10.000000 seconds 00:38:08.426 00:38:08.427 Latency(us) 00:38:08.427 [2024-12-07T09:12:37.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:08.427 [2024-12-07T09:12:37.153Z] =================================================================================================================== 00:38:08.427 [2024-12-07T09:12:37.153Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:08.427 10:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1506143 00:38:08.427 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:08.685 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:08.944 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:08.944 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db509c50-edfc-4271-be2b-3568ba882512 00:38:09.202 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:09.202 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:09.202 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:09.202 [2024-12-07 10:12:37.871903] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:09.203 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db509c50-edfc-4271-be2b-3568ba882512 00:38:09.203 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:38:09.203 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db509c50-edfc-4271-be2b-3568ba882512 00:38:09.203 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:09.203 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:09.203 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:09.461 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:09.461 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:09.461 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:09.461 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:09.461 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:09.461 10:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db509c50-edfc-4271-be2b-3568ba882512 00:38:09.461 request: 00:38:09.461 { 00:38:09.461 "uuid": "db509c50-edfc-4271-be2b-3568ba882512", 00:38:09.461 "method": 
"bdev_lvol_get_lvstores", 00:38:09.461 "req_id": 1 00:38:09.461 } 00:38:09.461 Got JSON-RPC error response 00:38:09.461 response: 00:38:09.461 { 00:38:09.461 "code": -19, 00:38:09.461 "message": "No such device" 00:38:09.461 } 00:38:09.461 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:38:09.461 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:09.462 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:09.462 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:09.462 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:09.721 aio_bdev 00:38:09.721 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 908e905b-c515-401b-b0a6-2fc381b94c4b 00:38:09.721 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=908e905b-c515-401b-b0a6-2fc381b94c4b 00:38:09.721 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:09.721 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:38:09.721 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:09.721 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:09.721 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:09.981 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 908e905b-c515-401b-b0a6-2fc381b94c4b -t 2000 00:38:09.981 [ 00:38:09.981 { 00:38:09.981 "name": "908e905b-c515-401b-b0a6-2fc381b94c4b", 00:38:09.981 "aliases": [ 00:38:09.981 "lvs/lvol" 00:38:09.981 ], 00:38:09.981 "product_name": "Logical Volume", 00:38:09.981 "block_size": 4096, 00:38:09.981 "num_blocks": 38912, 00:38:09.981 "uuid": "908e905b-c515-401b-b0a6-2fc381b94c4b", 00:38:09.981 "assigned_rate_limits": { 00:38:09.981 "rw_ios_per_sec": 0, 00:38:09.981 "rw_mbytes_per_sec": 0, 00:38:09.981 "r_mbytes_per_sec": 0, 00:38:09.981 "w_mbytes_per_sec": 0 00:38:09.981 }, 00:38:09.981 "claimed": false, 00:38:09.981 "zoned": false, 00:38:09.981 "supported_io_types": { 00:38:09.981 "read": true, 00:38:09.981 "write": true, 00:38:09.981 "unmap": true, 00:38:09.981 "flush": false, 00:38:09.981 "reset": true, 00:38:09.981 "nvme_admin": false, 00:38:09.981 "nvme_io": false, 00:38:09.981 "nvme_io_md": false, 00:38:09.981 "write_zeroes": true, 00:38:09.981 "zcopy": false, 00:38:09.981 "get_zone_info": false, 00:38:09.981 "zone_management": false, 00:38:09.981 "zone_append": false, 00:38:09.981 "compare": false, 00:38:09.981 "compare_and_write": false, 00:38:09.981 "abort": false, 00:38:09.981 "seek_hole": true, 00:38:09.981 "seek_data": true, 00:38:09.981 "copy": false, 00:38:09.981 "nvme_iov_md": false 00:38:09.981 }, 00:38:09.981 "driver_specific": { 00:38:09.981 "lvol": { 00:38:09.981 "lvol_store_uuid": "db509c50-edfc-4271-be2b-3568ba882512", 00:38:09.981 "base_bdev": "aio_bdev", 00:38:09.981 
"thin_provision": false, 00:38:09.981 "num_allocated_clusters": 38, 00:38:09.981 "snapshot": false, 00:38:09.981 "clone": false, 00:38:09.981 "esnap_clone": false 00:38:09.981 } 00:38:09.981 } 00:38:09.981 } 00:38:09.981 ] 00:38:09.981 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:38:09.981 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db509c50-edfc-4271-be2b-3568ba882512 00:38:09.981 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:10.240 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:10.240 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db509c50-edfc-4271-be2b-3568ba882512 00:38:10.240 10:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:10.499 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:10.499 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 908e905b-c515-401b-b0a6-2fc381b94c4b 00:38:10.759 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u db509c50-edfc-4271-be2b-3568ba882512 
00:38:10.759 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:11.018 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:11.018 00:38:11.018 real 0m15.550s 00:38:11.018 user 0m15.095s 00:38:11.018 sys 0m1.397s 00:38:11.018 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:11.018 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:11.018 ************************************ 00:38:11.018 END TEST lvs_grow_clean 00:38:11.018 ************************************ 00:38:11.018 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:11.018 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:11.018 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:11.018 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:11.018 ************************************ 00:38:11.018 START TEST lvs_grow_dirty 00:38:11.018 ************************************ 00:38:11.018 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:38:11.018 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:11.018 10:12:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:11.018 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:11.018 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:11.018 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:11.018 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:11.018 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:11.018 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:11.018 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:11.277 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:11.277 10:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:11.535 10:12:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a39c3f2a-516f-4074-9f85-ef57600dc0cf 00:38:11.536 10:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a39c3f2a-516f-4074-9f85-ef57600dc0cf 00:38:11.536 10:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:11.794 10:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:11.794 10:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:11.794 10:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a39c3f2a-516f-4074-9f85-ef57600dc0cf lvol 150 00:38:11.794 10:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=dcaddfef-3a19-434b-9610-cb2fa543fe80 00:38:11.794 10:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:11.794 10:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:12.053 [2024-12-07 10:12:40.683932] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:12.053 [2024-12-07 
10:12:40.684092] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:12.053 true 00:38:12.053 10:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a39c3f2a-516f-4074-9f85-ef57600dc0cf 00:38:12.053 10:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:12.312 10:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:12.313 10:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:12.572 10:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dcaddfef-3a19-434b-9610-cb2fa543fe80 00:38:12.572 10:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:12.832 [2024-12-07 10:12:41.420121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:12.832 10:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:13.092 10:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1508731 00:38:13.092 10:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:13.092 10:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:13.092 10:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1508731 /var/tmp/bdevperf.sock 00:38:13.092 10:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1508731 ']' 00:38:13.092 10:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:13.092 10:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:13.092 10:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:13.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:13.092 10:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:13.092 10:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:13.092 [2024-12-07 10:12:41.644224] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:38:13.092 [2024-12-07 10:12:41.644275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508731 ] 00:38:13.092 [2024-12-07 10:12:41.698720] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:13.092 [2024-12-07 10:12:41.740474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:13.352 10:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:13.352 10:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:38:13.352 10:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:13.610 Nvme0n1 00:38:13.610 10:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:13.868 [ 00:38:13.868 { 00:38:13.868 "name": "Nvme0n1", 00:38:13.868 "aliases": [ 00:38:13.868 "dcaddfef-3a19-434b-9610-cb2fa543fe80" 00:38:13.868 ], 00:38:13.868 "product_name": "NVMe disk", 00:38:13.868 "block_size": 4096, 00:38:13.868 "num_blocks": 38912, 00:38:13.868 "uuid": "dcaddfef-3a19-434b-9610-cb2fa543fe80", 00:38:13.868 "numa_id": 1, 00:38:13.868 "assigned_rate_limits": { 00:38:13.868 "rw_ios_per_sec": 0, 00:38:13.868 "rw_mbytes_per_sec": 0, 00:38:13.868 "r_mbytes_per_sec": 0, 00:38:13.868 "w_mbytes_per_sec": 0 00:38:13.868 }, 00:38:13.868 "claimed": false, 00:38:13.868 "zoned": false, 
00:38:13.868 "supported_io_types": { 00:38:13.868 "read": true, 00:38:13.868 "write": true, 00:38:13.868 "unmap": true, 00:38:13.868 "flush": true, 00:38:13.868 "reset": true, 00:38:13.868 "nvme_admin": true, 00:38:13.868 "nvme_io": true, 00:38:13.868 "nvme_io_md": false, 00:38:13.868 "write_zeroes": true, 00:38:13.868 "zcopy": false, 00:38:13.868 "get_zone_info": false, 00:38:13.868 "zone_management": false, 00:38:13.868 "zone_append": false, 00:38:13.868 "compare": true, 00:38:13.868 "compare_and_write": true, 00:38:13.868 "abort": true, 00:38:13.868 "seek_hole": false, 00:38:13.868 "seek_data": false, 00:38:13.868 "copy": true, 00:38:13.868 "nvme_iov_md": false 00:38:13.868 }, 00:38:13.868 "memory_domains": [ 00:38:13.868 { 00:38:13.868 "dma_device_id": "system", 00:38:13.868 "dma_device_type": 1 00:38:13.868 } 00:38:13.868 ], 00:38:13.868 "driver_specific": { 00:38:13.868 "nvme": [ 00:38:13.868 { 00:38:13.868 "trid": { 00:38:13.868 "trtype": "TCP", 00:38:13.868 "adrfam": "IPv4", 00:38:13.869 "traddr": "10.0.0.2", 00:38:13.869 "trsvcid": "4420", 00:38:13.869 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:13.869 }, 00:38:13.869 "ctrlr_data": { 00:38:13.869 "cntlid": 1, 00:38:13.869 "vendor_id": "0x8086", 00:38:13.869 "model_number": "SPDK bdev Controller", 00:38:13.869 "serial_number": "SPDK0", 00:38:13.869 "firmware_revision": "24.09.1", 00:38:13.869 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:13.869 "oacs": { 00:38:13.869 "security": 0, 00:38:13.869 "format": 0, 00:38:13.869 "firmware": 0, 00:38:13.869 "ns_manage": 0 00:38:13.869 }, 00:38:13.869 "multi_ctrlr": true, 00:38:13.869 "ana_reporting": false 00:38:13.869 }, 00:38:13.869 "vs": { 00:38:13.869 "nvme_version": "1.3" 00:38:13.869 }, 00:38:13.869 "ns_data": { 00:38:13.869 "id": 1, 00:38:13.869 "can_share": true 00:38:13.869 } 00:38:13.869 } 00:38:13.869 ], 00:38:13.869 "mp_policy": "active_passive" 00:38:13.869 } 00:38:13.869 } 00:38:13.869 ] 00:38:13.869 10:12:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:13.869 10:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1508856 00:38:13.869 10:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:13.869 Running I/O for 10 seconds... 00:38:14.803 Latency(us) 00:38:14.803 [2024-12-07T09:12:43.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:14.803 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:14.803 Nvme0n1 : 1.00 22266.00 86.98 0.00 0.00 0.00 0.00 0.00 00:38:14.803 [2024-12-07T09:12:43.529Z] =================================================================================================================== 00:38:14.803 [2024-12-07T09:12:43.529Z] Total : 22266.00 86.98 0.00 0.00 0.00 0.00 0.00 00:38:14.803 00:38:15.739 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a39c3f2a-516f-4074-9f85-ef57600dc0cf 00:38:15.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:15.997 Nvme0n1 : 2.00 22356.00 87.33 0.00 0.00 0.00 0.00 0.00 00:38:15.997 [2024-12-07T09:12:44.723Z] =================================================================================================================== 00:38:15.997 [2024-12-07T09:12:44.723Z] Total : 22356.00 87.33 0.00 0.00 0.00 0.00 0.00 00:38:15.997 00:38:15.997 true 00:38:15.997 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u a39c3f2a-516f-4074-9f85-ef57600dc0cf 00:38:15.997 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:16.255 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:16.255 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:16.255 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1508856 00:38:16.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:16.822 Nvme0n1 : 3.00 22417.67 87.57 0.00 0.00 0.00 0.00 0.00 00:38:16.822 [2024-12-07T09:12:45.548Z] =================================================================================================================== 00:38:16.822 [2024-12-07T09:12:45.548Z] Total : 22417.67 87.57 0.00 0.00 0.00 0.00 0.00 00:38:16.822 00:38:18.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:18.199 Nvme0n1 : 4.00 22517.75 87.96 0.00 0.00 0.00 0.00 0.00 00:38:18.200 [2024-12-07T09:12:46.926Z] =================================================================================================================== 00:38:18.200 [2024-12-07T09:12:46.926Z] Total : 22517.75 87.96 0.00 0.00 0.00 0.00 0.00 00:38:18.200 00:38:19.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:19.137 Nvme0n1 : 5.00 22532.80 88.02 0.00 0.00 0.00 0.00 0.00 00:38:19.137 [2024-12-07T09:12:47.863Z] =================================================================================================================== 00:38:19.137 [2024-12-07T09:12:47.863Z] Total : 22532.80 88.02 0.00 0.00 0.00 0.00 0.00 00:38:19.137 00:38:20.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:38:20.075 Nvme0n1 : 6.00 22576.83 88.19 0.00 0.00 0.00 0.00 0.00 00:38:20.075 [2024-12-07T09:12:48.801Z] =================================================================================================================== 00:38:20.075 [2024-12-07T09:12:48.801Z] Total : 22576.83 88.19 0.00 0.00 0.00 0.00 0.00 00:38:20.075 00:38:21.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:21.012 Nvme0n1 : 7.00 22602.57 88.29 0.00 0.00 0.00 0.00 0.00 00:38:21.012 [2024-12-07T09:12:49.738Z] =================================================================================================================== 00:38:21.012 [2024-12-07T09:12:49.738Z] Total : 22602.57 88.29 0.00 0.00 0.00 0.00 0.00 00:38:21.012 00:38:21.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:21.949 Nvme0n1 : 8.00 22585.12 88.22 0.00 0.00 0.00 0.00 0.00 00:38:21.949 [2024-12-07T09:12:50.675Z] =================================================================================================================== 00:38:21.949 [2024-12-07T09:12:50.675Z] Total : 22585.12 88.22 0.00 0.00 0.00 0.00 0.00 00:38:21.949 00:38:22.885 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:22.885 Nvme0n1 : 9.00 22600.22 88.28 0.00 0.00 0.00 0.00 0.00 00:38:22.885 [2024-12-07T09:12:51.611Z] =================================================================================================================== 00:38:22.885 [2024-12-07T09:12:51.611Z] Total : 22600.22 88.28 0.00 0.00 0.00 0.00 0.00 00:38:22.885 00:38:23.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:23.825 Nvme0n1 : 10.00 22612.10 88.33 0.00 0.00 0.00 0.00 0.00 00:38:23.825 [2024-12-07T09:12:52.551Z] =================================================================================================================== 00:38:23.825 [2024-12-07T09:12:52.551Z] Total : 22612.10 88.33 0.00 0.00 0.00 0.00 0.00 00:38:23.825 00:38:23.825 
00:38:23.825 Latency(us) 00:38:23.825 [2024-12-07T09:12:52.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:23.826 Nvme0n1 : 10.01 22612.84 88.33 0.00 0.00 5657.01 3462.01 15158.76 00:38:23.826 [2024-12-07T09:12:52.552Z] =================================================================================================================== 00:38:23.826 [2024-12-07T09:12:52.552Z] Total : 22612.84 88.33 0.00 0.00 5657.01 3462.01 15158.76 00:38:23.826 { 00:38:23.826 "results": [ 00:38:23.826 { 00:38:23.826 "job": "Nvme0n1", 00:38:23.826 "core_mask": "0x2", 00:38:23.826 "workload": "randwrite", 00:38:23.826 "status": "finished", 00:38:23.826 "queue_depth": 128, 00:38:23.826 "io_size": 4096, 00:38:23.826 "runtime": 10.005332, 00:38:23.826 "iops": 22612.84283220187, 00:38:23.826 "mibps": 88.33141731328855, 00:38:23.826 "io_failed": 0, 00:38:23.826 "io_timeout": 0, 00:38:23.826 "avg_latency_us": 5657.009245565726, 00:38:23.826 "min_latency_us": 3462.0104347826086, 00:38:23.826 "max_latency_us": 15158.761739130436 00:38:23.826 } 00:38:23.826 ], 00:38:23.826 "core_count": 1 00:38:23.826 } 00:38:23.826 10:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1508731 00:38:23.826 10:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1508731 ']' 00:38:23.826 10:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1508731 00:38:23.826 10:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:38:23.826 10:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:23.826 10:12:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1508731 00:38:24.085 10:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:24.085 10:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:24.085 10:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1508731' 00:38:24.085 killing process with pid 1508731 00:38:24.085 10:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1508731 00:38:24.085 Received shutdown signal, test time was about 10.000000 seconds 00:38:24.085 00:38:24.085 Latency(us) 00:38:24.085 [2024-12-07T09:12:52.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:24.085 [2024-12-07T09:12:52.811Z] =================================================================================================================== 00:38:24.085 [2024-12-07T09:12:52.811Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:24.085 10:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1508731 00:38:24.085 10:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:24.344 10:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:24.602 10:12:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a39c3f2a-516f-4074-9f85-ef57600dc0cf 00:38:24.602 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:24.602 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:24.602 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:24.602 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1505810 00:38:24.602 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1505810 00:38:24.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1505810 Killed "${NVMF_APP[@]}" "$@" 00:38:24.860 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:24.860 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:24.860 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:24.860 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:24.860 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:24.860 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=1510574 00:38:24.860 10:12:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 1510574 00:38:24.860 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:24.860 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1510574 ']' 00:38:24.860 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:24.860 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:24.860 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:24.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:24.860 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:24.860 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:24.860 [2024-12-07 10:12:53.401500] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:24.860 [2024-12-07 10:12:53.402428] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:38:24.860 [2024-12-07 10:12:53.402465] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:24.860 [2024-12-07 10:12:53.460767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.860 [2024-12-07 10:12:53.501258] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:24.860 [2024-12-07 10:12:53.501298] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:24.860 [2024-12-07 10:12:53.501305] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:24.860 [2024-12-07 10:12:53.501311] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:24.860 [2024-12-07 10:12:53.501316] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:24.860 [2024-12-07 10:12:53.501335] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:24.860 [2024-12-07 10:12:53.563441] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:24.860 [2024-12-07 10:12:53.563694] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:25.119 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:25.119 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:38:25.119 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:25.119 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:25.119 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:25.119 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:25.119 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:25.119 [2024-12-07 10:12:53.792504] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:25.119 [2024-12-07 10:12:53.792612] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:25.119 [2024-12-07 10:12:53.792650] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:25.119 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:25.119 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev dcaddfef-3a19-434b-9610-cb2fa543fe80 00:38:25.119 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local 
bdev_name=dcaddfef-3a19-434b-9610-cb2fa543fe80 00:38:25.119 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:25.119 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:38:25.119 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:25.119 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:25.119 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:25.378 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dcaddfef-3a19-434b-9610-cb2fa543fe80 -t 2000 00:38:25.637 [ 00:38:25.637 { 00:38:25.637 "name": "dcaddfef-3a19-434b-9610-cb2fa543fe80", 00:38:25.637 "aliases": [ 00:38:25.637 "lvs/lvol" 00:38:25.637 ], 00:38:25.637 "product_name": "Logical Volume", 00:38:25.637 "block_size": 4096, 00:38:25.637 "num_blocks": 38912, 00:38:25.637 "uuid": "dcaddfef-3a19-434b-9610-cb2fa543fe80", 00:38:25.637 "assigned_rate_limits": { 00:38:25.637 "rw_ios_per_sec": 0, 00:38:25.637 "rw_mbytes_per_sec": 0, 00:38:25.637 "r_mbytes_per_sec": 0, 00:38:25.637 "w_mbytes_per_sec": 0 00:38:25.637 }, 00:38:25.637 "claimed": false, 00:38:25.637 "zoned": false, 00:38:25.637 "supported_io_types": { 00:38:25.637 "read": true, 00:38:25.637 "write": true, 00:38:25.637 "unmap": true, 00:38:25.637 "flush": false, 00:38:25.637 "reset": true, 00:38:25.637 "nvme_admin": false, 00:38:25.637 "nvme_io": false, 00:38:25.637 "nvme_io_md": false, 00:38:25.637 "write_zeroes": true, 
00:38:25.637 "zcopy": false, 00:38:25.637 "get_zone_info": false, 00:38:25.637 "zone_management": false, 00:38:25.637 "zone_append": false, 00:38:25.637 "compare": false, 00:38:25.637 "compare_and_write": false, 00:38:25.637 "abort": false, 00:38:25.637 "seek_hole": true, 00:38:25.637 "seek_data": true, 00:38:25.637 "copy": false, 00:38:25.637 "nvme_iov_md": false 00:38:25.637 }, 00:38:25.637 "driver_specific": { 00:38:25.637 "lvol": { 00:38:25.637 "lvol_store_uuid": "a39c3f2a-516f-4074-9f85-ef57600dc0cf", 00:38:25.637 "base_bdev": "aio_bdev", 00:38:25.637 "thin_provision": false, 00:38:25.637 "num_allocated_clusters": 38, 00:38:25.637 "snapshot": false, 00:38:25.637 "clone": false, 00:38:25.637 "esnap_clone": false 00:38:25.637 } 00:38:25.637 } 00:38:25.637 } 00:38:25.637 ] 00:38:25.637 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:38:25.637 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a39c3f2a-516f-4074-9f85-ef57600dc0cf 00:38:25.637 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:25.896 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:25.896 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a39c3f2a-516f-4074-9f85-ef57600dc0cf 00:38:25.896 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:25.896 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:25.896 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:26.155 [2024-12-07 10:12:54.741694] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:26.155 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a39c3f2a-516f-4074-9f85-ef57600dc0cf 00:38:26.155 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:38:26.155 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a39c3f2a-516f-4074-9f85-ef57600dc0cf 00:38:26.155 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:26.155 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:26.155 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:26.155 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:26.155 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:26.155 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:26.155 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:26.155 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:26.155 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a39c3f2a-516f-4074-9f85-ef57600dc0cf 00:38:26.414 request: 00:38:26.414 { 00:38:26.414 "uuid": "a39c3f2a-516f-4074-9f85-ef57600dc0cf", 00:38:26.414 "method": "bdev_lvol_get_lvstores", 00:38:26.414 "req_id": 1 00:38:26.414 } 00:38:26.414 Got JSON-RPC error response 00:38:26.414 response: 00:38:26.414 { 00:38:26.414 "code": -19, 00:38:26.414 "message": "No such device" 00:38:26.414 } 00:38:26.414 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:38:26.414 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:26.414 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:26.414 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:26.414 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:26.672 aio_bdev 00:38:26.672 10:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dcaddfef-3a19-434b-9610-cb2fa543fe80 00:38:26.672 10:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=dcaddfef-3a19-434b-9610-cb2fa543fe80 00:38:26.672 10:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:26.672 10:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:38:26.672 10:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:26.672 10:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:26.672 10:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:26.672 10:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dcaddfef-3a19-434b-9610-cb2fa543fe80 -t 2000 00:38:26.930 [ 00:38:26.930 { 00:38:26.930 "name": "dcaddfef-3a19-434b-9610-cb2fa543fe80", 00:38:26.930 "aliases": [ 00:38:26.930 "lvs/lvol" 00:38:26.930 ], 00:38:26.930 "product_name": "Logical Volume", 00:38:26.930 "block_size": 4096, 00:38:26.930 "num_blocks": 38912, 00:38:26.930 "uuid": "dcaddfef-3a19-434b-9610-cb2fa543fe80", 00:38:26.930 "assigned_rate_limits": { 00:38:26.930 "rw_ios_per_sec": 0, 00:38:26.930 "rw_mbytes_per_sec": 0, 00:38:26.930 
"r_mbytes_per_sec": 0, 00:38:26.930 "w_mbytes_per_sec": 0 00:38:26.930 }, 00:38:26.930 "claimed": false, 00:38:26.930 "zoned": false, 00:38:26.930 "supported_io_types": { 00:38:26.930 "read": true, 00:38:26.930 "write": true, 00:38:26.930 "unmap": true, 00:38:26.931 "flush": false, 00:38:26.931 "reset": true, 00:38:26.931 "nvme_admin": false, 00:38:26.931 "nvme_io": false, 00:38:26.931 "nvme_io_md": false, 00:38:26.931 "write_zeroes": true, 00:38:26.931 "zcopy": false, 00:38:26.931 "get_zone_info": false, 00:38:26.931 "zone_management": false, 00:38:26.931 "zone_append": false, 00:38:26.931 "compare": false, 00:38:26.931 "compare_and_write": false, 00:38:26.931 "abort": false, 00:38:26.931 "seek_hole": true, 00:38:26.931 "seek_data": true, 00:38:26.931 "copy": false, 00:38:26.931 "nvme_iov_md": false 00:38:26.931 }, 00:38:26.931 "driver_specific": { 00:38:26.931 "lvol": { 00:38:26.931 "lvol_store_uuid": "a39c3f2a-516f-4074-9f85-ef57600dc0cf", 00:38:26.931 "base_bdev": "aio_bdev", 00:38:26.931 "thin_provision": false, 00:38:26.931 "num_allocated_clusters": 38, 00:38:26.931 "snapshot": false, 00:38:26.931 "clone": false, 00:38:26.931 "esnap_clone": false 00:38:26.931 } 00:38:26.931 } 00:38:26.931 } 00:38:26.931 ] 00:38:26.931 10:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:38:26.931 10:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a39c3f2a-516f-4074-9f85-ef57600dc0cf 00:38:26.931 10:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:27.189 10:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:27.189 10:12:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a39c3f2a-516f-4074-9f85-ef57600dc0cf 00:38:27.189 10:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:27.189 10:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:27.189 10:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dcaddfef-3a19-434b-9610-cb2fa543fe80 00:38:27.448 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a39c3f2a-516f-4074-9f85-ef57600dc0cf 00:38:27.706 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:27.964 00:38:27.964 real 0m16.788s 00:38:27.964 user 0m33.914s 00:38:27.964 sys 0m4.019s 00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:27.964 ************************************ 00:38:27.964 END TEST lvs_grow_dirty 00:38:27.964 ************************************ 
00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:27.964 nvmf_trace.0 00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:27.964 10:12:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:27.964 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:27.964 rmmod nvme_tcp 00:38:27.964 rmmod nvme_fabrics 00:38:27.964 rmmod nvme_keyring 00:38:27.965 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:27.965 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:27.965 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:27.965 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 1510574 ']' 00:38:27.965 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 1510574 00:38:27.965 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1510574 ']' 00:38:27.965 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1510574 00:38:27.965 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:38:27.965 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:27.965 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1510574 00:38:28.222 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:28.222 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:28.222 
10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1510574' 00:38:28.222 killing process with pid 1510574 00:38:28.222 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1510574 00:38:28.222 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1510574 00:38:28.222 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:28.222 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:28.222 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:28.222 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:28.222 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:38:28.222 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:28.222 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:38:28.222 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:28.222 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:28.222 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:28.222 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:28.222 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:30.749 
10:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:30.749 00:38:30.749 real 0m40.666s 00:38:30.749 user 0m51.143s 00:38:30.749 sys 0m9.782s 00:38:30.749 10:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:30.749 10:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:30.749 ************************************ 00:38:30.749 END TEST nvmf_lvs_grow 00:38:30.749 ************************************ 00:38:30.749 10:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:30.749 10:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:30.749 10:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:30.749 10:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:30.749 ************************************ 00:38:30.749 START TEST nvmf_bdev_io_wait 00:38:30.749 ************************************ 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:30.749 * Looking for test storage... 
00:38:30.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:30.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.749 --rc genhtml_branch_coverage=1 00:38:30.749 --rc genhtml_function_coverage=1 00:38:30.749 --rc genhtml_legend=1 00:38:30.749 --rc geninfo_all_blocks=1 00:38:30.749 --rc geninfo_unexecuted_blocks=1 00:38:30.749 00:38:30.749 ' 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:30.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.749 --rc genhtml_branch_coverage=1 00:38:30.749 --rc genhtml_function_coverage=1 00:38:30.749 --rc genhtml_legend=1 00:38:30.749 --rc geninfo_all_blocks=1 00:38:30.749 --rc geninfo_unexecuted_blocks=1 00:38:30.749 00:38:30.749 ' 00:38:30.749 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:30.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.750 --rc genhtml_branch_coverage=1 00:38:30.750 --rc genhtml_function_coverage=1 00:38:30.750 --rc genhtml_legend=1 00:38:30.750 --rc geninfo_all_blocks=1 00:38:30.750 --rc geninfo_unexecuted_blocks=1 00:38:30.750 00:38:30.750 ' 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:30.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.750 --rc genhtml_branch_coverage=1 00:38:30.750 --rc genhtml_function_coverage=1 
00:38:30.750 --rc genhtml_legend=1 00:38:30.750 --rc geninfo_all_blocks=1 00:38:30.750 --rc geninfo_unexecuted_blocks=1 00:38:30.750 00:38:30.750 ' 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:38:30.750 10:12:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.750 10:12:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:30.750 10:12:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:30.750 10:12:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:30.750 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:36.016 10:13:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:36.016 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:36.017 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:36.017 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice 
== unknown ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net 
devices under 0000:86:00.0: cvl_0_0' 00:38:36.017 Found net devices under 0000:86:00.0: cvl_0_0 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:36.017 Found net devices under 0000:86:00.1: cvl_0_1 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:36.017 10:13:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:36.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:36.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:38:36.017 00:38:36.017 --- 10.0.0.2 ping statistics --- 00:38:36.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:36.017 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:36.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:36.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:38:36.017 00:38:36.017 --- 10.0.0.1 ping statistics --- 00:38:36.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:36.017 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:36.017 10:13:04 
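In the namespace setup above, `ipts` at common.sh@287 expands (common.sh@786) into an `iptables` call with a comment tag appended. A sketch of that wrapper, inferred only from its logged expansion (the function body is an assumption; the tagging lets teardown find and delete SPDK-owned rules):

```shell
# Inferred sketch of the ipts helper: forward all arguments to iptables and
# tag the rule with a comment so SPDK-owned rules can be located on cleanup.
# (Body reconstructed from the expansion at common.sh@786 in the trace.)
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# As invoked above (requires root):
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```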
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=1514619 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 1514619 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1514619 ']' 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:36.017 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:36.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:36.018 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:36.018 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:36.018 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:36.018 [2024-12-07 10:13:04.669124] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:36.018 [2024-12-07 10:13:04.670100] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:38:36.018 [2024-12-07 10:13:04.670136] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:36.018 [2024-12-07 10:13:04.729107] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:36.278 [2024-12-07 10:13:04.772891] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:36.278 [2024-12-07 10:13:04.772931] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:36.278 [2024-12-07 10:13:04.772938] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:36.278 [2024-12-07 10:13:04.772944] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:36.278 [2024-12-07 10:13:04.772970] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:36.278 [2024-12-07 10:13:04.773013] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:36.278 [2024-12-07 10:13:04.773136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:36.278 [2024-12-07 10:13:04.773198] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:36.278 [2024-12-07 10:13:04.773199] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:36.278 [2024-12-07 10:13:04.773497] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.278 10:13:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:36.278 [2024-12-07 10:13:04.912169] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:36.278 [2024-12-07 10:13:04.912211] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:36.278 [2024-12-07 10:13:04.912659] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:36.278 [2024-12-07 10:13:04.913114] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:36.278 [2024-12-07 10:13:04.917865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:36.278 Malloc0 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.278 10:13:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:36.278 [2024-12-07 10:13:04.973846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1514647 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1514650 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:36.278 10:13:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1514652 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:36.278 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:36.278 { 00:38:36.278 "params": { 00:38:36.278 "name": "Nvme$subsystem", 00:38:36.278 "trtype": "$TEST_TRANSPORT", 00:38:36.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:36.278 "adrfam": "ipv4", 00:38:36.278 "trsvcid": "$NVMF_PORT", 00:38:36.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:36.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:36.278 "hdgst": ${hdgst:-false}, 00:38:36.278 "ddgst": ${ddgst:-false} 00:38:36.278 }, 00:38:36.278 "method": "bdev_nvme_attach_controller" 00:38:36.278 } 00:38:36.278 EOF 00:38:36.278 )") 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1514654 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:36.279 10:13:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:36.279 { 00:38:36.279 "params": { 00:38:36.279 "name": "Nvme$subsystem", 00:38:36.279 "trtype": "$TEST_TRANSPORT", 00:38:36.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:36.279 "adrfam": "ipv4", 00:38:36.279 "trsvcid": "$NVMF_PORT", 00:38:36.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:36.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:36.279 "hdgst": ${hdgst:-false}, 00:38:36.279 "ddgst": ${ddgst:-false} 00:38:36.279 }, 00:38:36.279 "method": "bdev_nvme_attach_controller" 00:38:36.279 } 00:38:36.279 EOF 00:38:36.279 )") 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:36.279 10:13:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:36.279 { 00:38:36.279 "params": { 00:38:36.279 "name": "Nvme$subsystem", 00:38:36.279 "trtype": "$TEST_TRANSPORT", 00:38:36.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:36.279 "adrfam": "ipv4", 00:38:36.279 "trsvcid": "$NVMF_PORT", 00:38:36.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:36.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:36.279 "hdgst": ${hdgst:-false}, 00:38:36.279 "ddgst": ${ddgst:-false} 00:38:36.279 }, 00:38:36.279 "method": "bdev_nvme_attach_controller" 00:38:36.279 } 00:38:36.279 EOF 00:38:36.279 )") 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:36.279 { 00:38:36.279 "params": { 00:38:36.279 "name": "Nvme$subsystem", 00:38:36.279 "trtype": "$TEST_TRANSPORT", 00:38:36.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:36.279 "adrfam": "ipv4", 00:38:36.279 "trsvcid": "$NVMF_PORT", 00:38:36.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:36.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:36.279 "hdgst": ${hdgst:-false}, 00:38:36.279 "ddgst": ${ddgst:-false} 00:38:36.279 }, 00:38:36.279 "method": "bdev_nvme_attach_controller" 00:38:36.279 } 00:38:36.279 EOF 00:38:36.279 )") 00:38:36.279 
10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1514647 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:36.279 "params": { 00:38:36.279 "name": "Nvme1", 00:38:36.279 "trtype": "tcp", 00:38:36.279 "traddr": "10.0.0.2", 00:38:36.279 "adrfam": "ipv4", 00:38:36.279 "trsvcid": "4420", 00:38:36.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:36.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:36.279 "hdgst": false, 00:38:36.279 "ddgst": false 00:38:36.279 }, 00:38:36.279 "method": "bdev_nvme_attach_controller" 00:38:36.279 }' 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:36.279 "params": { 00:38:36.279 "name": "Nvme1", 00:38:36.279 "trtype": "tcp", 00:38:36.279 "traddr": "10.0.0.2", 00:38:36.279 "adrfam": "ipv4", 00:38:36.279 "trsvcid": "4420", 00:38:36.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:36.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:36.279 "hdgst": false, 00:38:36.279 "ddgst": false 00:38:36.279 }, 00:38:36.279 "method": "bdev_nvme_attach_controller" 00:38:36.279 }' 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:36.279 "params": { 00:38:36.279 "name": "Nvme1", 00:38:36.279 "trtype": "tcp", 00:38:36.279 "traddr": "10.0.0.2", 00:38:36.279 "adrfam": "ipv4", 00:38:36.279 "trsvcid": "4420", 00:38:36.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:36.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:36.279 "hdgst": false, 00:38:36.279 "ddgst": false 00:38:36.279 }, 00:38:36.279 "method": "bdev_nvme_attach_controller" 00:38:36.279 }' 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 
-- # IFS=, 00:38:36.279 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:36.279 "params": { 00:38:36.279 "name": "Nvme1", 00:38:36.279 "trtype": "tcp", 00:38:36.279 "traddr": "10.0.0.2", 00:38:36.279 "adrfam": "ipv4", 00:38:36.279 "trsvcid": "4420", 00:38:36.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:36.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:36.279 "hdgst": false, 00:38:36.279 "ddgst": false 00:38:36.279 }, 00:38:36.279 "method": "bdev_nvme_attach_controller" 00:38:36.279 }' 00:38:36.551 [2024-12-07 10:13:05.023565] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:38:36.551 [2024-12-07 10:13:05.023562] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:38:36.551 [2024-12-07 10:13:05.023617] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:36.551 [2024-12-07 10:13:05.023618] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:36.551 [2024-12-07 10:13:05.024466] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:38:36.551 [2024-12-07 10:13:05.024511] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:36.551 [2024-12-07 10:13:05.026578] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
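The `printf` expansion above is the JSON that `gen_nvmf_target_json` pipes to each bdevperf instance via `--json /dev/fd/63`. A condensed stand-in for that generator, reproducing the expansion seen in the trace (the literals are hard-coded here; in the real common.sh they come from `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT`):

```shell
# Condensed sketch of gen_nvmf_target_json for a single subsystem, matching
# the expansion printed in the trace. Values are hard-coded from the log;
# the real script substitutes environment variables into this heredoc.
gen_nvmf_target_json() {
    local subsystem=1
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
gen_nvmf_target_json
```

Each of the four bdevperf processes (write, read, flush, unmap) receives this same config and attaches to the listener created earlier on 10.0.0.2:4420.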
00:38:36.551 [2024-12-07 10:13:05.026621] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:36.551 [2024-12-07 10:13:05.211213] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.551 [2024-12-07 10:13:05.239015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:38:36.814 [2024-12-07 10:13:05.303685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.814 [2024-12-07 10:13:05.331308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:38:36.814 [2024-12-07 10:13:05.403021] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.814 [2024-12-07 10:13:05.436099] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:38:36.814 [2024-12-07 10:13:05.453924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.814 [2024-12-07 10:13:05.481748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:38:37.072 Running I/O for 1 seconds... 00:38:37.331 Running I/O for 1 seconds... 00:38:37.331 Running I/O for 1 seconds... 00:38:37.588 Running I/O for 1 seconds... 
00:38:38.206 11856.00 IOPS, 46.31 MiB/s 00:38:38.206 Latency(us) 00:38:38.206 [2024-12-07T09:13:06.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:38.206 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:38.206 Nvme1n1 : 1.01 11889.07 46.44 0.00 0.00 10715.26 5299.87 13050.21 00:38:38.206 [2024-12-07T09:13:06.932Z] =================================================================================================================== 00:38:38.206 [2024-12-07T09:13:06.932Z] Total : 11889.07 46.44 0.00 0.00 10715.26 5299.87 13050.21 00:38:38.206 9777.00 IOPS, 38.19 MiB/s 00:38:38.206 Latency(us) 00:38:38.206 [2024-12-07T09:13:06.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:38.206 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:38.206 Nvme1n1 : 1.01 9848.78 38.47 0.00 0.00 12950.95 1702.51 16526.47 00:38:38.206 [2024-12-07T09:13:06.933Z] =================================================================================================================== 00:38:38.207 [2024-12-07T09:13:06.933Z] Total : 9848.78 38.47 0.00 0.00 12950.95 1702.51 16526.47 00:38:38.207 10719.00 IOPS, 41.87 MiB/s 00:38:38.207 Latency(us) 00:38:38.207 [2024-12-07T09:13:06.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:38.207 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:38.207 Nvme1n1 : 1.00 10806.26 42.21 0.00 0.00 11818.67 2194.03 17666.23 00:38:38.207 [2024-12-07T09:13:06.933Z] =================================================================================================================== 00:38:38.207 [2024-12-07T09:13:06.933Z] Total : 10806.26 42.21 0.00 0.00 11818.67 2194.03 17666.23 00:38:38.466 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1514650 00:38:38.466 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@39 -- # wait 1514652 00:38:38.466 246328.00 IOPS, 962.22 MiB/s 00:38:38.466 Latency(us) 00:38:38.466 [2024-12-07T09:13:07.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:38.466 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:38.466 Nvme1n1 : 1.00 245944.70 960.72 0.00 0.00 517.59 235.07 1545.79 00:38:38.466 [2024-12-07T09:13:07.192Z] =================================================================================================================== 00:38:38.466 [2024-12-07T09:13:07.192Z] Total : 245944.70 960.72 0.00 0.00 517.59 235.07 1545.79 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1514654 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:38.724 10:13:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:38.724 rmmod nvme_tcp 00:38:38.724 rmmod nvme_fabrics 00:38:38.724 rmmod nvme_keyring 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 1514619 ']' 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 1514619 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1514619 ']' 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1514619 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:38.724 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1514619 00:38:38.725 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:38.725 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:38.725 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1514619' 00:38:38.725 killing process with pid 1514619 00:38:38.725 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1514619 00:38:38.725 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1514619 00:38:38.984 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:38.984 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:38.984 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:38.984 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:38.984 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:38.984 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:38:38.984 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:38:38.984 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:38.984 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:38.984 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:38.984 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:38.984 
10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:41.519 00:38:41.519 real 0m10.643s 00:38:41.519 user 0m16.454s 00:38:41.519 sys 0m6.591s 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:41.519 ************************************ 00:38:41.519 END TEST nvmf_bdev_io_wait 00:38:41.519 ************************************ 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:41.519 ************************************ 00:38:41.519 START TEST nvmf_queue_depth 00:38:41.519 ************************************ 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:41.519 * Looking for test storage... 
00:38:41.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:41.519 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:41.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.520 --rc genhtml_branch_coverage=1 00:38:41.520 --rc genhtml_function_coverage=1 00:38:41.520 --rc genhtml_legend=1 00:38:41.520 --rc geninfo_all_blocks=1 00:38:41.520 --rc geninfo_unexecuted_blocks=1 00:38:41.520 00:38:41.520 ' 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:41.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.520 --rc genhtml_branch_coverage=1 00:38:41.520 --rc genhtml_function_coverage=1 00:38:41.520 --rc genhtml_legend=1 00:38:41.520 --rc geninfo_all_blocks=1 00:38:41.520 --rc geninfo_unexecuted_blocks=1 00:38:41.520 00:38:41.520 ' 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:41.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.520 --rc genhtml_branch_coverage=1 00:38:41.520 --rc genhtml_function_coverage=1 00:38:41.520 --rc genhtml_legend=1 00:38:41.520 --rc geninfo_all_blocks=1 00:38:41.520 --rc geninfo_unexecuted_blocks=1 00:38:41.520 00:38:41.520 ' 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:41.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.520 --rc genhtml_branch_coverage=1 00:38:41.520 --rc genhtml_function_coverage=1 00:38:41.520 --rc genhtml_legend=1 00:38:41.520 --rc 
geninfo_all_blocks=1 00:38:41.520 --rc geninfo_unexecuted_blocks=1 00:38:41.520 00:38:41.520 ' 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.520 10:13:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:41.520 10:13:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.520 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:41.520 10:13:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:41.521 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:41.521 10:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:46.789 
10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
pci_devs+=("${e810[@]}") 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:46.789 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:46.789 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:46.789 10:13:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:46.789 10:13:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:46.789 Found net devices under 0000:86:00.0: cvl_0_0 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:46.789 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:46.789 Found net devices under 0000:86:00.1: cvl_0_1 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == 
yes ]] 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:46.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:46.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:38:46.790 00:38:46.790 --- 10.0.0.2 ping statistics --- 00:38:46.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:46.790 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:46.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:46.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:38:46.790 00:38:46.790 --- 10.0.0.1 ping statistics --- 00:38:46.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:46.790 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:46.790 10:13:15 
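The harness above splits the two E810 ports across network namespaces (cvl_0_0 moved into cvl_0_0_ns_spdk at 10.0.0.2, cvl_0_1 left in the root namespace at 10.0.0.1) and gates readiness on one `ping -c 1` in each direction. A minimal sketch of pulling the pass/fail signal out of that ping summary (a hypothetical helper for illustration, not part of the test scripts; sample text copied from the log above):

```python
import re

def parse_ping_summary(output: str) -> dict:
    """Extract packet-loss and rtt stats from `ping -c N` summary output."""
    loss = re.search(
        r"(\d+) packets transmitted, (\d+) received, (\d+)% packet loss", output)
    rtt = re.search(
        r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", output)
    return {
        "transmitted": int(loss.group(1)),
        "received": int(loss.group(2)),
        "loss_pct": int(loss.group(3)),
        "rtt_avg_ms": float(rtt.group(2)),
    }

# Summary lines exactly as they appear in the log's first ping (10.0.0.2):
sample = ("1 packets transmitted, 1 received, 0% packet loss, time 0ms\n"
          "rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms")
stats = parse_ping_summary(sample)
print(stats["loss_pct"], stats["rtt_avg_ms"])  # → 0 0.453
```

A loss percentage of 0 on both directions is what lets `nvmf_tcp_init` fall through to `return 0` and continue the test.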
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=1518433 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 1518433 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1518433 ']' 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:46.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:46.790 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:46.790 [2024-12-07 10:13:15.389443] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:46.790 [2024-12-07 10:13:15.390403] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:38:46.790 [2024-12-07 10:13:15.390436] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:46.790 [2024-12-07 10:13:15.451885] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.790 [2024-12-07 10:13:15.491482] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:46.790 [2024-12-07 10:13:15.491525] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:46.790 [2024-12-07 10:13:15.491536] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:46.790 [2024-12-07 10:13:15.491542] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:46.790 [2024-12-07 10:13:15.491563] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:46.790 [2024-12-07 10:13:15.491584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:47.047 [2024-12-07 10:13:15.552732] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:47.047 [2024-12-07 10:13:15.552958] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:47.047 [2024-12-07 10:13:15.612002] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.047 10:13:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:47.047 Malloc0 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:47.047 [2024-12-07 10:13:15.672127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1518520 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1518520 /var/tmp/bdevperf.sock 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1518520 ']' 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:47.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
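Stripped of the xtrace noise, the target-side configuration in queue_depth.sh lines 23–27 is five RPCs. A reference sketch of the same sequence issued by hand with SPDK's `scripts/rpc.py` (transport options, malloc sizes, NQN, address, and port all taken verbatim from the log; not runnable without a live `nvmf_tgt` on the default /var/tmp/spdk.sock):

```
# TCP transport with optimized mode (-o) and 8192-byte in-capsule data (-u)
rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MB RAM-backed bdev with 512-byte blocks to serve as the namespace
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The final `add_listener` is what produces the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice seen above, after which bdevperf can attach from the root namespace.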
00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:47.047 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:47.047 [2024-12-07 10:13:15.721512] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:38:47.047 [2024-12-07 10:13:15.721555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1518520 ] 00:38:47.305 [2024-12-07 10:13:15.775979] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.305 [2024-12-07 10:13:15.816290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:47.305 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:47.305 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:38:47.305 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:47.305 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.305 10:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:47.563 NVMe0n1 00:38:47.563 10:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.563 10:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:47.563 Running I/O for 10 seconds... 00:38:49.873 11440.00 IOPS, 44.69 MiB/s [2024-12-07T09:13:19.536Z] 11786.00 IOPS, 46.04 MiB/s [2024-12-07T09:13:20.473Z] 11957.00 IOPS, 46.71 MiB/s [2024-12-07T09:13:21.409Z] 12031.50 IOPS, 47.00 MiB/s [2024-12-07T09:13:22.346Z] 12083.80 IOPS, 47.20 MiB/s [2024-12-07T09:13:23.282Z] 12118.50 IOPS, 47.34 MiB/s [2024-12-07T09:13:24.219Z] 12151.43 IOPS, 47.47 MiB/s [2024-12-07T09:13:25.598Z] 12170.12 IOPS, 47.54 MiB/s [2024-12-07T09:13:26.530Z] 12187.22 IOPS, 47.61 MiB/s [2024-12-07T09:13:26.530Z] 12200.80 IOPS, 47.66 MiB/s 00:38:57.804 Latency(us) 00:38:57.804 [2024-12-07T09:13:26.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:57.804 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:38:57.804 Verification LBA range: start 0x0 length 0x4000 00:38:57.804 NVMe0n1 : 10.05 12242.34 47.82 0.00 0.00 83373.98 10827.69 52656.75 00:38:57.804 [2024-12-07T09:13:26.530Z] =================================================================================================================== 00:38:57.804 [2024-12-07T09:13:26.530Z] Total : 12242.34 47.82 0.00 0.00 83373.98 10827.69 52656.75 00:38:57.804 { 00:38:57.804 "results": [ 00:38:57.804 { 00:38:57.804 "job": "NVMe0n1", 00:38:57.804 "core_mask": "0x1", 00:38:57.804 "workload": "verify", 00:38:57.804 "status": "finished", 00:38:57.804 "verify_range": { 00:38:57.804 "start": 0, 00:38:57.804 "length": 16384 00:38:57.804 }, 00:38:57.804 "queue_depth": 1024, 00:38:57.804 "io_size": 4096, 00:38:57.804 "runtime": 10.049711, 00:38:57.804 "iops": 12242.342093220392, 00:38:57.804 "mibps": 47.821648801642155, 00:38:57.804 "io_failed": 0, 00:38:57.804 
"io_timeout": 0, 00:38:57.804 "avg_latency_us": 83373.97583898994, 00:38:57.804 "min_latency_us": 10827.686956521738, 00:38:57.804 "max_latency_us": 52656.751304347825 00:38:57.804 } 00:38:57.804 ], 00:38:57.804 "core_count": 1 00:38:57.804 } 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1518520 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1518520 ']' 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1518520 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1518520 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1518520' 00:38:57.804 killing process with pid 1518520 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1518520 00:38:57.804 Received shutdown signal, test time was about 10.000000 seconds 00:38:57.804 00:38:57.804 Latency(us) 00:38:57.804 [2024-12-07T09:13:26.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:57.804 [2024-12-07T09:13:26.530Z] 
=================================================================================================================== 00:38:57.804 [2024-12-07T09:13:26.530Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1518520 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:57.804 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:57.804 rmmod nvme_tcp 00:38:58.062 rmmod nvme_fabrics 00:38:58.062 rmmod nvme_keyring 00:38:58.062 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:58.062 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:58.062 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:58.062 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 1518433 ']' 00:38:58.062 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
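As a sanity check on the bdevperf summary above: with 4096-byte IOs, throughput in MiB/s is simply IOPS/256 (4096 / 2^20 = 1/256), and IOPS is total completed IOs divided by the runtime. The figures below are copied from the JSON result block in the log, so the arithmetic can be verified offline:

```python
# Figures from the bdevperf JSON results above (NVMe0n1, qd=1024, 4 KiB verify).
io_size = 4096
runtime_s = 10.049711
iops = 12242.342093220392
mibps = 47.821648801642155

# 4 KiB blocks => MiB/s = IOPS * 4096 / 2**20 = IOPS / 256
assert abs(mibps - iops * io_size / 2**20) < 1e-9

# IOPS * runtime recovers the total number of completed IOs for the run.
total_ios = round(iops * runtime_s)
print(total_ios)  # → 123032
```

The per-second samples (11440 → 12200 IOPS) climbing toward the 12242 average are typical warm-up behavior for a verify workload at queue depth 1024.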
nvmf/common.sh@514 -- # killprocess 1518433 00:38:58.062 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1518433 ']' 00:38:58.062 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1518433 00:38:58.062 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:38:58.062 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:58.062 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1518433 00:38:58.062 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:58.062 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:58.062 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1518433' 00:38:58.062 killing process with pid 1518433 00:38:58.062 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1518433 00:38:58.062 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1518433 00:38:58.321 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:58.321 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:58.321 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:58.321 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:58.321 10:13:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:38:58.321 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:58.321 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:38:58.321 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:58.321 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:58.321 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:58.321 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:58.321 10:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:00.218 10:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:00.218 00:39:00.218 real 0m19.178s 00:39:00.218 user 0m22.540s 00:39:00.218 sys 0m5.873s 00:39:00.218 10:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:00.218 10:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:00.218 ************************************ 00:39:00.218 END TEST nvmf_queue_depth 00:39:00.218 ************************************ 00:39:00.218 10:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:00.218 10:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode 
-- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:00.218 10:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:00.218 10:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:00.478 ************************************ 00:39:00.478 START TEST nvmf_target_multipath 00:39:00.478 ************************************ 00:39:00.478 10:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:00.478 * Looking for test storage... 00:39:00.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 
-- # read -ra ver1 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:00.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.478 --rc genhtml_branch_coverage=1 00:39:00.478 --rc genhtml_function_coverage=1 00:39:00.478 --rc genhtml_legend=1 00:39:00.478 --rc geninfo_all_blocks=1 00:39:00.478 --rc geninfo_unexecuted_blocks=1 00:39:00.478 00:39:00.478 ' 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:00.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.478 --rc genhtml_branch_coverage=1 00:39:00.478 --rc genhtml_function_coverage=1 00:39:00.478 --rc genhtml_legend=1 00:39:00.478 --rc geninfo_all_blocks=1 00:39:00.478 --rc geninfo_unexecuted_blocks=1 00:39:00.478 00:39:00.478 ' 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:00.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.478 --rc genhtml_branch_coverage=1 00:39:00.478 --rc genhtml_function_coverage=1 00:39:00.478 --rc genhtml_legend=1 00:39:00.478 --rc geninfo_all_blocks=1 00:39:00.478 --rc geninfo_unexecuted_blocks=1 00:39:00.478 00:39:00.478 ' 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:00.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.478 --rc genhtml_branch_coverage=1 00:39:00.478 --rc genhtml_function_coverage=1 00:39:00.478 --rc genhtml_legend=1 00:39:00.478 --rc geninfo_all_blocks=1 00:39:00.478 --rc geninfo_unexecuted_blocks=1 00:39:00.478 00:39:00.478 ' 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.478 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:00.479 10:13:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:00.479 10:13:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:39:00.479 10:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:39:05.747 10:13:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:39:05.747 Found 0000:86:00.0 (0x8086 - 0x159b) 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 
tcp == rdma ]] 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:39:05.747 Found 0000:86:00.1 (0x8086 - 0x159b) 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:39:05.747 Found net devices under 0000:86:00.0: cvl_0_0 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:05.747 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:39:05.748 Found net devices under 0000:86:00.1: 
cvl_0_1 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:05.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:05.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:39:05.748 00:39:05.748 --- 10.0.0.2 ping statistics --- 00:39:05.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:05.748 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:05.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:05.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:39:05.748 00:39:05.748 --- 10.0.0.1 ping statistics --- 00:39:05.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:05.748 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:05.748 10:13:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:39:05.748 only one NIC for nvmf test 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:05.748 rmmod nvme_tcp 00:39:05.748 rmmod nvme_fabrics 00:39:05.748 rmmod nvme_keyring 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:05.748 10:13:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:05.748 10:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:07.653 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
00:39:07.653 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:39:07.653 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:39:07.653 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:07.653 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:07.912 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:07.912 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:07.912 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:07.912 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:07.912 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:07.912 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:07.912 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:07.912 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:39:07.912 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:07.912 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:07.912 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:07.912 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@297 -- # iptr 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:07.913 00:39:07.913 real 0m7.437s 00:39:07.913 user 0m1.441s 00:39:07.913 sys 0m3.887s 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:07.913 ************************************ 00:39:07.913 END TEST nvmf_target_multipath 00:39:07.913 ************************************ 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh 
--transport=tcp --interrupt-mode 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:07.913 ************************************ 00:39:07.913 START TEST nvmf_zcopy 00:39:07.913 ************************************ 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:07.913 * Looking for test storage... 00:39:07.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 
00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:39:07.913 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:08.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.173 --rc genhtml_branch_coverage=1 00:39:08.173 --rc 
genhtml_function_coverage=1 00:39:08.173 --rc genhtml_legend=1 00:39:08.173 --rc geninfo_all_blocks=1 00:39:08.173 --rc geninfo_unexecuted_blocks=1 00:39:08.173 00:39:08.173 ' 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:08.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.173 --rc genhtml_branch_coverage=1 00:39:08.173 --rc genhtml_function_coverage=1 00:39:08.173 --rc genhtml_legend=1 00:39:08.173 --rc geninfo_all_blocks=1 00:39:08.173 --rc geninfo_unexecuted_blocks=1 00:39:08.173 00:39:08.173 ' 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:08.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.173 --rc genhtml_branch_coverage=1 00:39:08.173 --rc genhtml_function_coverage=1 00:39:08.173 --rc genhtml_legend=1 00:39:08.173 --rc geninfo_all_blocks=1 00:39:08.173 --rc geninfo_unexecuted_blocks=1 00:39:08.173 00:39:08.173 ' 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:08.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.173 --rc genhtml_branch_coverage=1 00:39:08.173 --rc genhtml_function_coverage=1 00:39:08.173 --rc genhtml_legend=1 00:39:08.173 --rc geninfo_all_blocks=1 00:39:08.173 --rc geninfo_unexecuted_blocks=1 00:39:08.173 00:39:08.173 ' 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.173 10:13:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:08.173 10:13:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:08.173 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:08.174 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:08.174 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:08.174 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:08.174 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:08.174 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:39:08.174 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:08.174 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:08.174 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:08.174 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:08.174 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:08.174 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:08.174 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:08.174 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:08.174 10:13:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:08.174 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:08.174 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:39:08.174 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga 
x722 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:13.448 10:13:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:39:13.448 Found 0000:86:00.0 (0x8086 - 0x159b) 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:39:13.448 Found 0000:86:00.1 (0x8086 - 0x159b) 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:13.448 10:13:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:39:13.448 Found net devices under 0000:86:00.0: cvl_0_0 00:39:13.448 10:13:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:13.448 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:39:13.449 Found net devices under 0000:86:00.1: cvl_0_1 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:13.449 10:13:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:13.449 10:13:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:13.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:13.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:39:13.449 00:39:13.449 --- 10.0.0.2 ping statistics --- 00:39:13.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:13.449 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:13.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:13.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:39:13.449 00:39:13.449 --- 10.0.0.1 ping statistics --- 00:39:13.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:13.449 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # 
nvmfpid=1527000 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 1527000 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1527000 ']' 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:13.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:13.449 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:13.449 [2024-12-07 10:13:42.024846] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:13.449 [2024-12-07 10:13:42.025776] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:39:13.449 [2024-12-07 10:13:42.025810] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:13.449 [2024-12-07 10:13:42.084770] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.449 [2024-12-07 10:13:42.124801] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:13.449 [2024-12-07 10:13:42.124839] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:13.449 [2024-12-07 10:13:42.124847] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:13.449 [2024-12-07 10:13:42.124853] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:13.449 [2024-12-07 10:13:42.124858] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:13.449 [2024-12-07 10:13:42.124882] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:13.709 [2024-12-07 10:13:42.185795] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:13.709 [2024-12-07 10:13:42.186029] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:13.709 [2024-12-07 10:13:42.245565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:13.709 
10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:13.709 [2024-12-07 10:13:42.261762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:13.709 malloc0 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:39:13.709 { 00:39:13.709 "params": { 00:39:13.709 "name": "Nvme$subsystem", 00:39:13.709 "trtype": "$TEST_TRANSPORT", 00:39:13.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:13.709 "adrfam": "ipv4", 00:39:13.709 "trsvcid": "$NVMF_PORT", 00:39:13.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:13.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:13.709 "hdgst": ${hdgst:-false}, 00:39:13.709 "ddgst": ${ddgst:-false} 00:39:13.709 }, 00:39:13.709 "method": "bdev_nvme_attach_controller" 00:39:13.709 } 00:39:13.709 EOF 00:39:13.709 )") 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:39:13.709 10:13:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:39:13.709 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:39:13.709 "params": { 00:39:13.709 "name": "Nvme1", 00:39:13.709 "trtype": "tcp", 00:39:13.709 "traddr": "10.0.0.2", 00:39:13.709 "adrfam": "ipv4", 00:39:13.709 "trsvcid": "4420", 00:39:13.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:13.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:13.709 "hdgst": false, 00:39:13.709 "ddgst": false 00:39:13.709 }, 00:39:13.709 "method": "bdev_nvme_attach_controller" 00:39:13.709 }' 00:39:13.709 [2024-12-07 10:13:42.352773] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:39:13.709 [2024-12-07 10:13:42.352816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527105 ] 00:39:13.709 [2024-12-07 10:13:42.407497] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.968 [2024-12-07 10:13:42.447970] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.968 Running I/O for 10 seconds... 
00:39:16.320 8126.00 IOPS, 63.48 MiB/s [2024-12-07T09:13:46.040Z] 8154.00 IOPS, 63.70 MiB/s [2024-12-07T09:13:46.988Z] 8155.33 IOPS, 63.71 MiB/s [2024-12-07T09:13:47.926Z] 8161.25 IOPS, 63.76 MiB/s [2024-12-07T09:13:48.862Z] 8168.00 IOPS, 63.81 MiB/s [2024-12-07T09:13:49.798Z] 8177.83 IOPS, 63.89 MiB/s [2024-12-07T09:13:50.735Z] 8182.57 IOPS, 63.93 MiB/s [2024-12-07T09:13:51.687Z] 8165.50 IOPS, 63.79 MiB/s [2024-12-07T09:13:53.061Z] 8173.11 IOPS, 63.85 MiB/s [2024-12-07T09:13:53.061Z] 8177.70 IOPS, 63.89 MiB/s 00:39:24.336 Latency(us) 00:39:24.336 [2024-12-07T09:13:53.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:24.336 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:24.336 Verification LBA range: start 0x0 length 0x1000 00:39:24.336 Nvme1n1 : 10.01 8180.62 63.91 0.00 0.00 15602.81 477.27 21769.35 00:39:24.336 [2024-12-07T09:13:53.062Z] =================================================================================================================== 00:39:24.336 [2024-12-07T09:13:53.062Z] Total : 8180.62 63.91 0.00 0.00 15602.81 477.27 21769.35 00:39:24.336 10:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1528716 00:39:24.336 10:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:24.336 10:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:24.336 10:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:24.336 10:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:24.336 10:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:39:24.336 10:13:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:39:24.336 [2024-12-07 10:13:52.849240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.849270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 10:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:39:24.336 10:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:39:24.336 { 00:39:24.336 "params": { 00:39:24.336 "name": "Nvme$subsystem", 00:39:24.336 "trtype": "$TEST_TRANSPORT", 00:39:24.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:24.336 "adrfam": "ipv4", 00:39:24.336 "trsvcid": "$NVMF_PORT", 00:39:24.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:24.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:24.336 "hdgst": ${hdgst:-false}, 00:39:24.336 "ddgst": ${ddgst:-false} 00:39:24.336 }, 00:39:24.336 "method": "bdev_nvme_attach_controller" 00:39:24.336 } 00:39:24.336 EOF 00:39:24.336 )") 00:39:24.336 10:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:39:24.336 10:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
00:39:24.336 [2024-12-07 10:13:52.857198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.857210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 10:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:39:24.336 10:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:39:24.336 "params": { 00:39:24.336 "name": "Nvme1", 00:39:24.336 "trtype": "tcp", 00:39:24.336 "traddr": "10.0.0.2", 00:39:24.336 "adrfam": "ipv4", 00:39:24.336 "trsvcid": "4420", 00:39:24.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:24.336 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:24.336 "hdgst": false, 00:39:24.336 "ddgst": false 00:39:24.336 }, 00:39:24.336 "method": "bdev_nvme_attach_controller" 00:39:24.336 }' 00:39:24.336 [2024-12-07 10:13:52.865197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.865207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 [2024-12-07 10:13:52.873195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.873204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 [2024-12-07 10:13:52.881196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.881209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 [2024-12-07 10:13:52.889194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.889204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 [2024-12-07 10:13:52.891615] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:39:24.336 [2024-12-07 10:13:52.891656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528716 ] 00:39:24.336 [2024-12-07 10:13:52.897197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.897206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 [2024-12-07 10:13:52.905196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.905205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 [2024-12-07 10:13:52.913194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.913203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 [2024-12-07 10:13:52.921196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.921205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 [2024-12-07 10:13:52.929197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.929206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 [2024-12-07 10:13:52.937196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.937216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 [2024-12-07 10:13:52.945196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.945205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:39:24.336 [2024-12-07 10:13:52.945614] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:24.336 [2024-12-07 10:13:52.953199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.953222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 [2024-12-07 10:13:52.961198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.961218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 [2024-12-07 10:13:52.969200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.969232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 [2024-12-07 10:13:52.977197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.977219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 [2024-12-07 10:13:52.985201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.985223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 [2024-12-07 10:13:52.986379] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:24.336 [2024-12-07 10:13:52.993198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:52.993219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 [2024-12-07 10:13:53.001209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:53.001237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 [2024-12-07 10:13:53.009203] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.336 [2024-12-07 10:13:53.009233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.336 [2024-12-07 10:13:53.017204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.337 [2024-12-07 10:13:53.017226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.337 [2024-12-07 10:13:53.025198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.337 [2024-12-07 10:13:53.025219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.337 [2024-12-07 10:13:53.033202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.337 [2024-12-07 10:13:53.033226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.337 [2024-12-07 10:13:53.041199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.337 [2024-12-07 10:13:53.041220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.337 [2024-12-07 10:13:53.049200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.337 [2024-12-07 10:13:53.049222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.337 [2024-12-07 10:13:53.057198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.337 [2024-12-07 10:13:53.057207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.595 [2024-12-07 10:13:53.065218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.595 [2024-12-07 10:13:53.065238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.595 [2024-12-07 10:13:53.073205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:24.595 [2024-12-07 10:13:53.073232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.595 [2024-12-07 10:13:53.081203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.595 [2024-12-07 10:13:53.081227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.595 [2024-12-07 10:13:53.089230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.595 [2024-12-07 10:13:53.089245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.595 [2024-12-07 10:13:53.097203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.595 [2024-12-07 10:13:53.097227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.595 [2024-12-07 10:13:53.105199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.595 [2024-12-07 10:13:53.105222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.595 [2024-12-07 10:13:53.113198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.595 [2024-12-07 10:13:53.113218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.595 [2024-12-07 10:13:53.121197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.595 [2024-12-07 10:13:53.121217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.595 [2024-12-07 10:13:53.129197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.595 [2024-12-07 10:13:53.129207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.595 [2024-12-07 10:13:53.137198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.595 
[2024-12-07 10:13:53.137217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.595 [2024-12-07 10:13:53.145199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.595 [2024-12-07 10:13:53.145220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.595 [2024-12-07 10:13:53.153199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.595 [2024-12-07 10:13:53.153223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.595 [2024-12-07 10:13:53.161203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.595 [2024-12-07 10:13:53.161233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.595 [2024-12-07 10:13:53.169197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.595 [2024-12-07 10:13:53.169207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.595 [2024-12-07 10:13:53.177196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.595 [2024-12-07 10:13:53.177205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.595 [2024-12-07 10:13:53.185198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.595 [2024-12-07 10:13:53.185218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.595 [2024-12-07 10:13:53.193202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.595 [2024-12-07 10:13:53.193225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:24.595 [2024-12-07 10:13:53.201198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:24.595 [2024-12-07 10:13:53.201218] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:24.595 [2024-12-07 10:13:53.209197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:24.595 [2024-12-07 10:13:53.209207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:24.596 Running I/O for 5 seconds...
00:39:25.632 15707.00 IOPS, 122.71 MiB/s [2024-12-07T09:13:54.358Z]
00:39:26.664 [2024-12-07 10:13:55.210608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:26.664 [2024-12-07 10:13:55.210627]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.664 [2024-12-07 10:13:55.225173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.664 [2024-12-07 10:13:55.225194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.664 [2024-12-07 10:13:55.237691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.664 [2024-12-07 10:13:55.237710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.664 [2024-12-07 10:13:55.248554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.664 [2024-12-07 10:13:55.248573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.664 [2024-12-07 10:13:55.263403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.664 [2024-12-07 10:13:55.263423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.664 [2024-12-07 10:13:55.271172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.664 [2024-12-07 10:13:55.271192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.664 [2024-12-07 10:13:55.279456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.664 [2024-12-07 10:13:55.279476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.664 [2024-12-07 10:13:55.287428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.665 [2024-12-07 10:13:55.287447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.665 [2024-12-07 10:13:55.297013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.665 [2024-12-07 10:13:55.297033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:26.665 15754.00 IOPS, 123.08 MiB/s [2024-12-07T09:13:55.391Z] [2024-12-07 10:13:55.310371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.665 [2024-12-07 10:13:55.310390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.665 [2024-12-07 10:13:55.319836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.665 [2024-12-07 10:13:55.319855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.665 [2024-12-07 10:13:55.334576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.665 [2024-12-07 10:13:55.334596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.665 [2024-12-07 10:13:55.343829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.665 [2024-12-07 10:13:55.343848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.665 [2024-12-07 10:13:55.359140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.665 [2024-12-07 10:13:55.359160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.665 [2024-12-07 10:13:55.367278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.665 [2024-12-07 10:13:55.367297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.665 [2024-12-07 10:13:55.375164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.665 [2024-12-07 10:13:55.375183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.921 [2024-12-07 10:13:55.390522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.921 [2024-12-07 10:13:55.390542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:26.921 [2024-12-07 10:13:55.400085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.921 [2024-12-07 10:13:55.400104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.921 [2024-12-07 10:13:55.415086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.921 [2024-12-07 10:13:55.415104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.921 [2024-12-07 10:13:55.424433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.921 [2024-12-07 10:13:55.424451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.921 [2024-12-07 10:13:55.438746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.921 [2024-12-07 10:13:55.438765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.921 [2024-12-07 10:13:55.446914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.446933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.922 [2024-12-07 10:13:55.455139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.455157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.922 [2024-12-07 10:13:55.463384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.463402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.922 [2024-12-07 10:13:55.471610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.471627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.922 [2024-12-07 10:13:55.479479] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.479497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.922 [2024-12-07 10:13:55.493824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.493843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.922 [2024-12-07 10:13:55.505399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.505416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.922 [2024-12-07 10:13:55.518664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.518683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.922 [2024-12-07 10:13:55.528112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.528131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.922 [2024-12-07 10:13:55.543166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.543185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.922 [2024-12-07 10:13:55.552606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.552624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.922 [2024-12-07 10:13:55.566535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.566559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.922 [2024-12-07 10:13:55.575764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.575782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.922 [2024-12-07 10:13:55.590560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.590579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.922 [2024-12-07 10:13:55.600405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.600424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.922 [2024-12-07 10:13:55.615130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.615149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.922 [2024-12-07 10:13:55.622946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.622969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.922 [2024-12-07 10:13:55.631286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.631304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.922 [2024-12-07 10:13:55.639473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.922 [2024-12-07 10:13:55.639490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.647801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.647819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.661670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 
[2024-12-07 10:13:55.661689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.673387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.673406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.680112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.680130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.692339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.692357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.706374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.706392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.715997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.716015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.731190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.731209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.739441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.739460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.747263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.747282] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.755440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.755459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.770530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.770552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.780351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.780369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.795557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.795575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.803527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.803545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.811320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.811338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.820309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.820327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.834037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.834055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:27.180 [2024-12-07 10:13:55.845661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.845679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.858528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.858546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.868108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.868126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.883084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.883101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.891022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.891039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.180 [2024-12-07 10:13:55.899126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.180 [2024-12-07 10:13:55.899144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:55.908844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:55.908863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:55.922989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:55.923008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:55.938389] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:55.938408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:55.949551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:55.949569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:55.962733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:55.962752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:55.970832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:55.970850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:55.979884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:55.979907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:55.994677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:55.994696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:56.004485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:56.004503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:56.019484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:56.019503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:56.027488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:56.027506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:56.035372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:56.035391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:56.043590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:56.043608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:56.051881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:56.051900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:56.060140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:56.060159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:56.071843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:56.071862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:56.086930] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:56.086954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:56.096020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:56.096038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:56.111171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 
[2024-12-07 10:13:56.111190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:56.119361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:56.119380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:56.127609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:56.127628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:56.135420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:56.135438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:56.150277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:56.150296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.438 [2024-12-07 10:13:56.159936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.438 [2024-12-07 10:13:56.159961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.696 [2024-12-07 10:13:56.174530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.696 [2024-12-07 10:13:56.174548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.696 [2024-12-07 10:13:56.184198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.696 [2024-12-07 10:13:56.184223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.696 [2024-12-07 10:13:56.198239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.696 [2024-12-07 10:13:56.198257] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.696 [2024-12-07 10:13:56.207822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.696 [2024-12-07 10:13:56.207840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.696 [2024-12-07 10:13:56.222595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.696 [2024-12-07 10:13:56.222614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.696 [2024-12-07 10:13:56.231932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.696 [2024-12-07 10:13:56.231958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.696 [2024-12-07 10:13:56.246737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.696 [2024-12-07 10:13:56.246756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.696 [2024-12-07 10:13:56.256325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.696 [2024-12-07 10:13:56.256343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.696 [2024-12-07 10:13:56.270652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.696 [2024-12-07 10:13:56.270670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.696 [2024-12-07 10:13:56.278615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.696 [2024-12-07 10:13:56.278632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.696 [2024-12-07 10:13:56.293154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.696 [2024-12-07 10:13:56.293174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:27.696 15804.33 IOPS, 123.47 MiB/s [2024-12-07T09:13:56.422Z] [2024-12-07 10:13:56.305059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.696 [2024-12-07 10:13:56.305079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.696 [2024-12-07 10:13:56.318550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.696 [2024-12-07 10:13:56.318569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.696 [2024-12-07 10:13:56.328155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.696 [2024-12-07 10:13:56.328173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.696 [2024-12-07 10:13:56.342467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.696 [2024-12-07 10:13:56.342485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.696 [2024-12-07 10:13:56.350468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.696 [2024-12-07 10:13:56.350487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.696 [2024-12-07 10:13:56.360439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.696 [2024-12-07 10:13:56.360458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.696 [2024-12-07 10:13:56.373788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.696 [2024-12-07 10:13:56.373806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.696 [2024-12-07 10:13:56.385706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.697 [2024-12-07 10:13:56.385724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:27.697 [2024-12-07 10:13:56.399052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.697 [2024-12-07 10:13:56.399072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.697 [2024-12-07 10:13:56.407619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.697 [2024-12-07 10:13:56.407637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.697 [2024-12-07 10:13:56.415588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.697 [2024-12-07 10:13:56.415607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.955 [2024-12-07 10:13:56.430553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.955 [2024-12-07 10:13:56.430575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.955 [2024-12-07 10:13:56.440184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.955 [2024-12-07 10:13:56.440204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.955 [2024-12-07 10:13:56.454830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.955 [2024-12-07 10:13:56.454851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.955 [2024-12-07 10:13:56.464247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.955 [2024-12-07 10:13:56.464266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.955 [2024-12-07 10:13:56.478184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.955 [2024-12-07 10:13:56.478205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.955 [2024-12-07 10:13:56.489093] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.955 [2024-12-07 10:13:56.489113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.955 [2024-12-07 10:13:56.502232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.955 [2024-12-07 10:13:56.502252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.955 [2024-12-07 10:13:56.513145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.955 [2024-12-07 10:13:56.513165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.955 [2024-12-07 10:13:56.520034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.955 [2024-12-07 10:13:56.520053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.956 [2024-12-07 10:13:56.528866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.956 [2024-12-07 10:13:56.528884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.956 [2024-12-07 10:13:56.540539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.956 [2024-12-07 10:13:56.540559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.956 [2024-12-07 10:13:56.554490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.956 [2024-12-07 10:13:56.554510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.956 [2024-12-07 10:13:56.563726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.956 [2024-12-07 10:13:56.563745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.956 [2024-12-07 10:13:56.579591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:27.956 [2024-12-07 10:13:56.579610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.956 [2024-12-07 10:13:56.587577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.956 [2024-12-07 10:13:56.587596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.956 [2024-12-07 10:13:56.595749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.956 [2024-12-07 10:13:56.595768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.956 [2024-12-07 10:13:56.610131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.956 [2024-12-07 10:13:56.610149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.956 [2024-12-07 10:13:56.621036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.956 [2024-12-07 10:13:56.621056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.956 [2024-12-07 10:13:56.635261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.956 [2024-12-07 10:13:56.635279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.956 [2024-12-07 10:13:56.643069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.956 [2024-12-07 10:13:56.643089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.956 [2024-12-07 10:13:56.658276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.956 [2024-12-07 10:13:56.658295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.956 [2024-12-07 10:13:56.668152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.956 
[2024-12-07 10:13:56.668171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.682814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.682834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.692357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.692378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.707093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.707113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.716595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.716614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.731006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.731026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.746624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.746644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.757478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.757497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.770169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.770189] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.779837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.779856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.794823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.794842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.802841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.802860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.813081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.813101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.826145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.826163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.835867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.835885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.850935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.850961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.859225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.859244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:28.215 [2024-12-07 10:13:56.867631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.867650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.876217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.876235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.889885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.889904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.901159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.901178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.908038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.908056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.920628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.920647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.215 [2024-12-07 10:13:56.933833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.215 [2024-12-07 10:13:56.933851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.474 [2024-12-07 10:13:56.944843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.474 [2024-12-07 10:13:56.944860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.474 [2024-12-07 10:13:56.959784] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.474 [2024-12-07 10:13:56.959803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.474 [2024-12-07 10:13:56.967838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.474 [2024-12-07 10:13:56.967856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.474 [2024-12-07 10:13:56.976330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.474 [2024-12-07 10:13:56.976348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.474 [2024-12-07 10:13:56.989534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.474 [2024-12-07 10:13:56.989553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.474 [2024-12-07 10:13:57.001919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.475 [2024-12-07 10:13:57.001939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.475 [2024-12-07 10:13:57.013121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.475 [2024-12-07 10:13:57.013141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.475 [2024-12-07 10:13:57.026523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.475 [2024-12-07 10:13:57.026541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.475 [2024-12-07 10:13:57.037246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.475 [2024-12-07 10:13:57.037264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.475 [2024-12-07 10:13:57.044468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:28.475 [2024-12-07 10:13:57.044490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.475 [2024-12-07 10:13:57.053264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.475 [2024-12-07 10:13:57.053283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.475 [2024-12-07 10:13:57.065177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.475 [2024-12-07 10:13:57.065195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.475 [2024-12-07 10:13:57.076854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.475 [2024-12-07 10:13:57.076873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.475 [2024-12-07 10:13:57.091223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.475 [2024-12-07 10:13:57.091241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.475 [2024-12-07 10:13:57.099255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.475 [2024-12-07 10:13:57.099273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.475 [2024-12-07 10:13:57.109055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.475 [2024-12-07 10:13:57.109073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.475 [2024-12-07 10:13:57.122181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.475 [2024-12-07 10:13:57.122199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.475 [2024-12-07 10:13:57.132128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.475 
[2024-12-07 10:13:57.132147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.475 [2024-12-07 10:13:57.146668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.475 [2024-12-07 10:13:57.146687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.475 [2024-12-07 10:13:57.156143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.475 [2024-12-07 10:13:57.156162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.475 [2024-12-07 10:13:57.170855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.475 [2024-12-07 10:13:57.170874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.475 [2024-12-07 10:13:57.180523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.475 [2024-12-07 10:13:57.180542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.475 [2024-12-07 10:13:57.195475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.475 [2024-12-07 10:13:57.195495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.734 [2024-12-07 10:13:57.203175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.734 [2024-12-07 10:13:57.203194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.734 [2024-12-07 10:13:57.211317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.734 [2024-12-07 10:13:57.211335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.734 [2024-12-07 10:13:57.221226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.734 [2024-12-07 10:13:57.221245] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.734 [2024-12-07 10:13:57.232151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.734 [2024-12-07 10:13:57.232170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.734 [2024-12-07 10:13:57.247449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.734 [2024-12-07 10:13:57.247468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.734 [2024-12-07 10:13:57.255512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.734 [2024-12-07 10:13:57.255535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.734 [2024-12-07 10:13:57.263638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.734 [2024-12-07 10:13:57.263658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.734 [2024-12-07 10:13:57.271766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.734 [2024-12-07 10:13:57.271785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.734 [2024-12-07 10:13:57.285974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.734 [2024-12-07 10:13:57.285993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.734 [2024-12-07 10:13:57.297222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.734 [2024-12-07 10:13:57.297240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.734 [2024-12-07 10:13:57.304343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.734 [2024-12-07 10:13:57.304361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:28.734 15791.25 IOPS, 123.37 MiB/s [2024-12-07T09:13:57.460Z] [2024-12-07 10:13:57.312802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.734 [2024-12-07 10:13:57.312822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.734 [2024-12-07 10:13:57.325037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.734 [2024-12-07 10:13:57.325057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.734 [2024-12-07 10:13:57.339062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.735 [2024-12-07 10:13:57.339083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.735 [2024-12-07 10:13:57.347041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.735 [2024-12-07 10:13:57.347060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.735 [2024-12-07 10:13:57.355462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.735 [2024-12-07 10:13:57.355480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.735 [2024-12-07 10:13:57.363673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.735 [2024-12-07 10:13:57.363692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.735 [2024-12-07 10:13:57.377985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.735 [2024-12-07 10:13:57.378005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.735 [2024-12-07 10:13:57.387332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.735 [2024-12-07 10:13:57.387350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:28.735 [2024-12-07 10:13:57.394662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.735 [2024-12-07 10:13:57.394680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.735 [2024-12-07 10:13:57.405438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.735 [2024-12-07 10:13:57.405455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.735 [2024-12-07 10:13:57.417290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.735 [2024-12-07 10:13:57.417309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.735 [2024-12-07 10:13:57.430207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.735 [2024-12-07 10:13:57.430226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.735 [2024-12-07 10:13:57.439937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.735 [2024-12-07 10:13:57.439962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.735 [2024-12-07 10:13:57.454525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.735 [2024-12-07 10:13:57.454548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.464265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.464284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.478781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.478800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.488545] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.488564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.502293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.502312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.511695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.511714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.526415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.526433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.537889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.537908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.549109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.549128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.562756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.562775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.573504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.573522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.587369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.587387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.595353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.595372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.603551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.603569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.618598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.618616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.628487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.628506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.642149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.642167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.653506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.653524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.666658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.666676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.674653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 
[2024-12-07 10:13:57.674671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.683251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.683269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.691422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.691441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.705910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.705928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:28.994 [2024-12-07 10:13:57.715557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:28.994 [2024-12-07 10:13:57.715576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.730957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.730976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.740394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.740412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.755662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.755680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.763624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.763644] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.772152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.772170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.785916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.785935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.795552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.795570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.802805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.802823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.813627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.813644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.825556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.825575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.838625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.838643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.846265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.846283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:29.254 [2024-12-07 10:13:57.857519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.857537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.868870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.868888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.882196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.882215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.892829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.892848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.907096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.907115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.915422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.915441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.924706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.924726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.938620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:29.254 [2024-12-07 10:13:57.938639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.254 [2024-12-07 10:13:57.947961] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.254 [2024-12-07 10:13:57.947981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.254 [2024-12-07 10:13:57.962628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.254 [2024-12-07 10:13:57.962647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.254 [2024-12-07 10:13:57.972050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.255 [2024-12-07 10:13:57.972069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:57.987007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:57.987027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:57.995019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:57.995038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.010250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.010268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.022091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.022110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.033626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.033645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.045268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.045287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.052353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.052372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.061056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.061075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.072539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.072558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.086396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.086415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.095940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.095965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.109918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.109937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.121811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.121829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.133285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.133304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.146615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.146634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.155018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.155036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.165089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.165109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.178873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.178899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.188520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.188539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.202737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.202757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.212041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.212060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.226384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.226403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.514 [2024-12-07 10:13:58.234607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.514 [2024-12-07 10:13:58.234626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.245039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.245059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.258422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.258441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.268002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.268021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.283001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.283021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.305177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.305197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 15776.60 IOPS, 123.25 MiB/s [2024-12-07T09:13:58.500Z] [2024-12-07 10:13:58.312201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.312226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774
00:39:29.774 Latency(us)
00:39:29.774 [2024-12-07T09:13:58.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:29.774 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:39:29.774 Nvme1n1 : 5.01 15784.85 123.32 0.00 0.00 8102.35 2194.03 13506.11
00:39:29.774 [2024-12-07T09:13:58.500Z] ===================================================================================================================
00:39:29.774 [2024-12-07T09:13:58.500Z] Total : 15784.85 123.32 0.00 0.00 8102.35 2194.03 13506.11
00:39:29.774 [2024-12-07 10:13:58.317201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.317217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.325198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.325215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.333202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.333213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.341212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.341230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.349204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.349218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.357197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.357208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.365200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.365213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.373197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.373209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.381198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.381211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.389196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.389207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.397196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.397207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.405197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.405208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.413195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.413206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.421194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.421203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.774 [2024-12-07 10:13:58.429194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.774 [2024-12-07 10:13:58.429203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.775 [2024-12-07 10:13:58.437199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.775 [2024-12-07 10:13:58.437215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.775 [2024-12-07 10:13:58.445199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.775 [2024-12-07 10:13:58.445212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.775 [2024-12-07 10:13:58.453200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.775 [2024-12-07 10:13:58.453212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.775 [2024-12-07 10:13:58.461195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.775 [2024-12-07 10:13:58.461204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.775 [2024-12-07 10:13:58.469195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.775 [2024-12-07 10:13:58.469206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.775 [2024-12-07 10:13:58.477198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.775 [2024-12-07 10:13:58.477208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.775 [2024-12-07 10:13:58.485195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:29.775 [2024-12-07 10:13:58.485204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:29.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1528716) - No such process
00:39:29.775 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1528716
00:39:29.775 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:29.775 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:29.775 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:39:30.034 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:30.034 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:39:30.034 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:30.034 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:39:30.034 delay0
00:39:30.034 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:30.034 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:39:30.034 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:30.034 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:39:30.034 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:30.034 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:39:30.034 [2024-12-07 10:13:58.651018] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:39:36.602 Initializing NVMe Controllers
00:39:36.602 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:36.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:39:36.602 Initialization complete. Launching workers.
00:39:36.602 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3198
00:39:36.602 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3474, failed to submit 44
00:39:36.602 success 3349, unsuccessful 125, failed 0
00:39:36.602 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:39:36.602 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:39:36.602 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup
00:39:36.602 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:39:36.602 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:39:36.602 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:39:36.602 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:39:36.602 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:39:36.602 rmmod nvme_tcp
00:39:36.602 rmmod nvme_fabrics
00:39:36.602 rmmod nvme_keyring
00:39:36.859 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:39:36.859 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:39:36.859 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:39:36.859 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 1527000 ']'
00:39:36.859 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 1527000
00:39:36.859 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1527000 ']'
00:39:36.859 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1527000
00:39:36.859 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:39:36.859 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:39:36.859 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1527000
00:39:36.859 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:39:36.859 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:39:36.859 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1527000'
00:39:36.859 killing process with pid 1527000
00:39:36.859 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1527000
00:39:36.859 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1527000
00:39:36.859 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:39:36.859 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:39:37.117 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:39:37.117 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:39:37.117 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:39:37.117 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save
00:39:37.117 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore
00:39:37.117 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:39:37.117 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:39:37.117 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:39:37.117 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:39:37.117 10:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:39:39.018 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:39:39.018
00:39:39.018 real 0m31.178s
00:39:39.018 user 0m40.527s
00:39:39.018 sys 0m12.093s
00:39:39.018 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:39:39.018 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:39:39.018 ************************************
00:39:39.018 END TEST nvmf_zcopy
00:39:39.018 ************************************
00:39:39.018 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:39:39.018 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:39:39.018 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:39:39.018 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:39:39.018 ************************************
00:39:39.018 START TEST nvmf_nmic
00:39:39.018 ************************************
00:39:39.018 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:39:39.278 * Looking for test storage...
00:39:39.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:39:39.278 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:39:39.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:39.279 --rc genhtml_branch_coverage=1
00:39:39.279 --rc genhtml_function_coverage=1
00:39:39.279 --rc genhtml_legend=1
00:39:39.279 --rc geninfo_all_blocks=1
00:39:39.279 --rc geninfo_unexecuted_blocks=1
00:39:39.279
00:39:39.279 '
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:39:39.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:39.279 --rc genhtml_branch_coverage=1
00:39:39.279 --rc genhtml_function_coverage=1
00:39:39.279 --rc genhtml_legend=1
00:39:39.279 --rc geninfo_all_blocks=1
00:39:39.279 --rc geninfo_unexecuted_blocks=1
00:39:39.279
00:39:39.279 '
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:39:39.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:39.279 --rc genhtml_branch_coverage=1
00:39:39.279 --rc genhtml_function_coverage=1
00:39:39.279 --rc genhtml_legend=1
00:39:39.279 --rc geninfo_all_blocks=1
00:39:39.279 --rc geninfo_unexecuted_blocks=1
00:39:39.279
00:39:39.279 '
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:39:39.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:39.279 --rc genhtml_branch_coverage=1
00:39:39.279 --rc genhtml_function_coverage=1
00:39:39.279 --rc genhtml_legend=1
00:39:39.279 --rc geninfo_all_blocks=1
00:39:39.279 --rc geninfo_unexecuted_blocks=1
00:39:39.279
00:39:39.279 '
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:39:39.279 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']'
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]]
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable
00:39:39.280 10:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=()
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=()
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=()
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=()
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=()
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=()
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=()
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic --
nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:39:44.546 Found 0000:86:00.0 (0x8086 - 0x159b) 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 
00:39:44.546 Found 0000:86:00.1 (0x8086 - 0x159b) 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:44.546 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:39:44.546 Found net devices under 0000:86:00.0: cvl_0_0 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:39:44.547 Found net devices under 0000:86:00.1: cvl_0_1 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:44.547 10:14:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:44.547 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:44.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:44.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:39:44.805 00:39:44.805 --- 10.0.0.2 ping statistics --- 00:39:44.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:44.805 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:44.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:44.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:39:44.805 00:39:44.805 --- 10.0.0.1 ping statistics --- 00:39:44.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:44.805 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=1534065 
00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 1534065 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1534065 ']' 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:44.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:44.805 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:44.806 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:44.806 [2024-12-07 10:14:13.379040] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:44.806 [2024-12-07 10:14:13.379971] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:39:44.806 [2024-12-07 10:14:13.380007] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:44.806 [2024-12-07 10:14:13.438388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:44.806 [2024-12-07 10:14:13.481849] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:44.806 [2024-12-07 10:14:13.481887] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:44.806 [2024-12-07 10:14:13.481894] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:44.806 [2024-12-07 10:14:13.481900] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:44.806 [2024-12-07 10:14:13.481905] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:44.806 [2024-12-07 10:14:13.481955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:44.806 [2024-12-07 10:14:13.482024] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:44.806 [2024-12-07 10:14:13.482123] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:39:44.806 [2024-12-07 10:14:13.482124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.065 [2024-12-07 10:14:13.558887] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:45.065 [2024-12-07 10:14:13.558995] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:45.065 [2024-12-07 10:14:13.559217] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:45.065 [2024-12-07 10:14:13.559534] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:45.065 [2024-12-07 10:14:13.559779] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.065 [2024-12-07 10:14:13.618605] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.065 Malloc0 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.065 [2024-12-07 10:14:13.666760] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:45.065 test case1: single bdev can't be used in multiple subsystems 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.065 [2024-12-07 10:14:13.694508] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:39:45.065 [2024-12-07 10:14:13.694527] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:45.065 [2024-12-07 10:14:13.694535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.065 request: 00:39:45.065 { 00:39:45.065 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:45.065 "namespace": { 00:39:45.065 "bdev_name": "Malloc0", 00:39:45.065 "no_auto_visible": false 00:39:45.065 }, 00:39:45.065 "method": "nvmf_subsystem_add_ns", 00:39:45.065 "req_id": 1 00:39:45.065 } 00:39:45.065 Got JSON-RPC error response 00:39:45.065 response: 00:39:45.065 { 00:39:45.065 "code": -32602, 00:39:45.065 "message": "Invalid parameters" 00:39:45.065 } 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:45.065 Adding namespace failed - expected result. 
00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:45.065 test case2: host connect to nvmf target in multiple paths 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:45.065 [2024-12-07 10:14:13.706609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.065 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:45.324 10:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:45.582 10:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:45.582 10:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:39:45.582 10:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:45.582 10:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:39:45.582 10:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:39:47.492 10:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:47.492 10:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:39:47.492 10:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:47.492 10:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:39:47.492 10:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:47.492 10:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:39:47.492 10:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:47.492 [global] 00:39:47.492 thread=1 00:39:47.492 invalidate=1 00:39:47.492 rw=write 00:39:47.492 time_based=1 00:39:47.492 runtime=1 00:39:47.492 ioengine=libaio 00:39:47.492 direct=1 00:39:47.492 bs=4096 00:39:47.492 iodepth=1 00:39:47.492 norandommap=0 00:39:47.492 numjobs=1 00:39:47.492 00:39:47.492 verify_dump=1 00:39:47.492 verify_backlog=512 00:39:47.492 verify_state_save=0 00:39:47.492 do_verify=1 00:39:47.492 verify=crc32c-intel 00:39:47.492 [job0] 00:39:47.492 filename=/dev/nvme0n1 00:39:47.492 Could not set queue depth (nvme0n1) 00:39:47.750 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:47.750 fio-3.35 00:39:47.750 Starting 1 thread 00:39:49.124 00:39:49.124 job0: (groupid=0, jobs=1): err= 0: pid=1534681: Sat Dec 7 
10:14:17 2024 00:39:49.124 read: IOPS=2075, BW=8304KiB/s (8503kB/s)(8312KiB/1001msec) 00:39:49.124 slat (nsec): min=7267, max=41038, avg=8481.90, stdev=1513.07 00:39:49.124 clat (usec): min=211, max=408, avg=241.17, stdev=10.43 00:39:49.124 lat (usec): min=219, max=416, avg=249.65, stdev=10.41 00:39:49.124 clat percentiles (usec): 00:39:49.124 | 1.00th=[ 223], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 233], 00:39:49.124 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:39:49.124 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 253], 00:39:49.124 | 99.00th=[ 260], 99.50th=[ 265], 99.90th=[ 343], 99.95th=[ 379], 00:39:49.124 | 99.99th=[ 408] 00:39:49.124 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:39:49.124 slat (usec): min=10, max=29158, avg=23.63, stdev=576.06 00:39:49.124 clat (usec): min=136, max=362, avg=158.89, stdev=14.03 00:39:49.124 lat (usec): min=151, max=29490, avg=182.52, stdev=579.64 00:39:49.124 clat percentiles (usec): 00:39:49.124 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:39:49.124 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 157], 60.00th=[ 159], 00:39:49.124 | 70.00th=[ 161], 80.00th=[ 161], 90.00th=[ 165], 95.00th=[ 172], 00:39:49.124 | 99.00th=[ 239], 99.50th=[ 241], 99.90th=[ 334], 99.95th=[ 363], 00:39:49.124 | 99.99th=[ 363] 00:39:49.124 bw ( KiB/s): min= 9708, max= 9708, per=94.90%, avg=9708.00, stdev= 0.00, samples=1 00:39:49.124 iops : min= 2427, max= 2427, avg=2427.00, stdev= 0.00, samples=1 00:39:49.124 lat (usec) : 250=94.31%, 500=5.69% 00:39:49.124 cpu : usr=4.40%, sys=7.00%, ctx=4641, majf=0, minf=1 00:39:49.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:49.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:49.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:49.124 issued rwts: total=2078,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:49.124 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:39:49.124 00:39:49.124 Run status group 0 (all jobs): 00:39:49.124 READ: bw=8304KiB/s (8503kB/s), 8304KiB/s-8304KiB/s (8503kB/s-8503kB/s), io=8312KiB (8511kB), run=1001-1001msec 00:39:49.124 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:39:49.124 00:39:49.124 Disk stats (read/write): 00:39:49.124 nvme0n1: ios=2073/2048, merge=0/0, ticks=865/313, in_queue=1178, util=98.70% 00:39:49.124 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:49.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:49.124 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:49.124 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:39:49.124 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:49.124 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:49.124 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:49.124 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:49.124 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:39:49.124 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:49.124 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:49.125 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # 
nvmfcleanup 00:39:49.125 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:49.125 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:49.125 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:49.125 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:49.125 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:49.125 rmmod nvme_tcp 00:39:49.125 rmmod nvme_fabrics 00:39:49.125 rmmod nvme_keyring 00:39:49.125 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:49.125 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:49.125 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:49.125 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 1534065 ']' 00:39:49.125 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 1534065 00:39:49.125 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1534065 ']' 00:39:49.125 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1534065 00:39:49.125 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:39:49.125 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:49.125 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1534065 00:39:49.383 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:49.383 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:49.383 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1534065' 00:39:49.383 killing process with pid 1534065 00:39:49.383 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1534065 00:39:49.383 10:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1534065 00:39:49.383 10:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:49.383 10:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:49.383 10:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:49.383 10:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:49.383 10:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:39:49.383 10:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:49.383 10:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:39:49.383 10:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:49.383 10:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:49.383 10:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:49.383 10:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:39:49.383 10:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:51.925 00:39:51.925 real 0m12.429s 00:39:51.925 user 0m22.908s 00:39:51.925 sys 0m5.849s 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:51.925 ************************************ 00:39:51.925 END TEST nvmf_nmic 00:39:51.925 ************************************ 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:51.925 ************************************ 00:39:51.925 START TEST nvmf_fio_target 00:39:51.925 ************************************ 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:51.925 * Looking for test storage... 
00:39:51.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:51.925 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:51.926 
10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:51.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.926 --rc genhtml_branch_coverage=1 00:39:51.926 --rc genhtml_function_coverage=1 00:39:51.926 --rc genhtml_legend=1 00:39:51.926 --rc geninfo_all_blocks=1 00:39:51.926 --rc geninfo_unexecuted_blocks=1 00:39:51.926 00:39:51.926 ' 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:51.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.926 --rc genhtml_branch_coverage=1 00:39:51.926 --rc genhtml_function_coverage=1 00:39:51.926 --rc genhtml_legend=1 00:39:51.926 --rc geninfo_all_blocks=1 00:39:51.926 --rc geninfo_unexecuted_blocks=1 00:39:51.926 00:39:51.926 ' 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:51.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.926 --rc genhtml_branch_coverage=1 00:39:51.926 --rc genhtml_function_coverage=1 00:39:51.926 --rc genhtml_legend=1 00:39:51.926 --rc geninfo_all_blocks=1 00:39:51.926 --rc geninfo_unexecuted_blocks=1 00:39:51.926 00:39:51.926 ' 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:51.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.926 --rc genhtml_branch_coverage=1 00:39:51.926 --rc genhtml_function_coverage=1 00:39:51.926 --rc genhtml_legend=1 00:39:51.926 --rc geninfo_all_blocks=1 
00:39:51.926 --rc geninfo_unexecuted_blocks=1 00:39:51.926 00:39:51.926 ' 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:39:51.926 
10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.926 10:14:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:51.926 
10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:51.926 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:51.927 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:51.927 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:51.927 10:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:51.927 10:14:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:57.191 10:14:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 
]] 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:57.191 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:39:57.191 Found 0000:86:00.0 (0x8086 - 0x159b) 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:39:57.192 Found 0000:86:00.1 (0x8086 - 0x159b) 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:57.192 10:14:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:39:57.192 Found net devices under 0000:86:00.0: cvl_0_0 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 
-- # net_devs+=("${pci_net_devs[@]}") 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:39:57.192 Found net devices under 0000:86:00.1: cvl_0_1 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:57.192 10:14:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip 
link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:57.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:57.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:39:57.192 00:39:57.192 --- 10.0.0.2 ping statistics --- 00:39:57.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:57.192 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:57.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:57.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:39:57.192 00:39:57.192 --- 10.0.0.1 ping statistics --- 00:39:57.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:57.192 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:57.192 10:14:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=1538217 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 1538217 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1538217 ']' 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:57.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:57.192 10:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:57.192 [2024-12-07 10:14:25.850558] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:57.192 [2024-12-07 10:14:25.851483] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:39:57.193 [2024-12-07 10:14:25.851518] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:57.193 [2024-12-07 10:14:25.911647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:57.450 [2024-12-07 10:14:25.954081] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:57.450 [2024-12-07 10:14:25.954119] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:57.450 [2024-12-07 10:14:25.954126] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:57.450 [2024-12-07 10:14:25.954132] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:57.450 [2024-12-07 10:14:25.954137] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:57.450 [2024-12-07 10:14:25.954205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:57.450 [2024-12-07 10:14:25.954301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:57.450 [2024-12-07 10:14:25.954387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:39:57.450 [2024-12-07 10:14:25.954388] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:57.450 [2024-12-07 10:14:26.029381] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:57.450 [2024-12-07 10:14:26.029434] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:57.450 [2024-12-07 10:14:26.029574] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:57.450 [2024-12-07 10:14:26.029886] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:57.450 [2024-12-07 10:14:26.030177] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:57.450 10:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:57.450 10:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:39:57.450 10:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:57.450 10:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:57.450 10:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:57.450 10:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:57.450 10:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:57.708 [2024-12-07 10:14:26.266872] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:57.708 10:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:57.965 10:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:57.965 10:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:39:58.223 10:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:58.223 10:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:58.223 10:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:58.223 10:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:58.480 10:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:58.480 10:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:58.738 10:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:58.995 10:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:58.995 10:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:59.252 10:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:59.252 10:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:59.510 10:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:39:59.510 10:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:59.510 10:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:59.767 10:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:59.767 10:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:00.024 10:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:00.024 10:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:40:00.025 10:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:00.282 [2024-12-07 10:14:28.911001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:00.282 10:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:40:00.540 10:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:40:00.798 10:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:01.057 10:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:40:01.057 10:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:40:01.057 10:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:40:01.057 10:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:40:01.057 10:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:40:01.057 10:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:40:02.959 10:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:40:02.959 10:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:40:02.959 10:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:40:02.959 10:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:40:02.959 10:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:40:02.959 10:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1208 -- # return 0 00:40:02.959 10:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:02.959 [global] 00:40:02.959 thread=1 00:40:02.959 invalidate=1 00:40:02.959 rw=write 00:40:02.959 time_based=1 00:40:02.959 runtime=1 00:40:02.959 ioengine=libaio 00:40:02.959 direct=1 00:40:02.959 bs=4096 00:40:02.959 iodepth=1 00:40:02.959 norandommap=0 00:40:02.959 numjobs=1 00:40:02.959 00:40:03.238 verify_dump=1 00:40:03.238 verify_backlog=512 00:40:03.238 verify_state_save=0 00:40:03.238 do_verify=1 00:40:03.238 verify=crc32c-intel 00:40:03.238 [job0] 00:40:03.238 filename=/dev/nvme0n1 00:40:03.238 [job1] 00:40:03.238 filename=/dev/nvme0n2 00:40:03.238 [job2] 00:40:03.238 filename=/dev/nvme0n3 00:40:03.238 [job3] 00:40:03.238 filename=/dev/nvme0n4 00:40:03.238 Could not set queue depth (nvme0n1) 00:40:03.238 Could not set queue depth (nvme0n2) 00:40:03.238 Could not set queue depth (nvme0n3) 00:40:03.238 Could not set queue depth (nvme0n4) 00:40:03.499 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:03.499 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:03.499 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:03.499 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:03.499 fio-3.35 00:40:03.499 Starting 4 threads 00:40:04.891 00:40:04.891 job0: (groupid=0, jobs=1): err= 0: pid=1539448: Sat Dec 7 10:14:33 2024 00:40:04.891 read: IOPS=22, BW=91.5KiB/s (93.7kB/s)(92.0KiB/1005msec) 00:40:04.891 slat (nsec): min=7533, max=23203, avg=21376.30, stdev=4003.39 00:40:04.891 clat (usec): min=324, max=42019, avg=39329.85, stdev=8510.38 00:40:04.891 lat (usec): min=334, 
max=42042, avg=39351.23, stdev=8512.86 00:40:04.891 clat percentiles (usec): 00:40:04.891 | 1.00th=[ 326], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:04.891 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:04.891 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:40:04.891 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:04.891 | 99.99th=[42206] 00:40:04.891 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:40:04.891 slat (nsec): min=9504, max=42721, avg=10576.11, stdev=1973.19 00:40:04.891 clat (usec): min=152, max=413, avg=182.39, stdev=22.06 00:40:04.891 lat (usec): min=162, max=456, avg=192.97, stdev=22.84 00:40:04.891 clat percentiles (usec): 00:40:04.891 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:40:04.891 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:40:04.891 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 241], 00:40:04.891 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 412], 99.95th=[ 412], 00:40:04.891 | 99.99th=[ 412] 00:40:04.891 bw ( KiB/s): min= 4096, max= 4096, per=25.28%, avg=4096.00, stdev= 0.00, samples=1 00:40:04.891 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:04.891 lat (usec) : 250=95.14%, 500=0.75% 00:40:04.891 lat (msec) : 50=4.11% 00:40:04.891 cpu : usr=0.40%, sys=0.30%, ctx=535, majf=0, minf=1 00:40:04.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:04.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:04.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:04.892 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:04.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:04.892 job1: (groupid=0, jobs=1): err= 0: pid=1539471: Sat Dec 7 10:14:33 2024 00:40:04.892 read: IOPS=22, BW=91.0KiB/s (93.2kB/s)(92.0KiB/1011msec) 
00:40:04.892 slat (nsec): min=9408, max=26866, avg=22175.39, stdev=4045.12 00:40:04.892 clat (usec): min=396, max=42020, avg=39307.28, stdev=8490.08 00:40:04.892 lat (usec): min=423, max=42044, avg=39329.46, stdev=8489.10 00:40:04.892 clat percentiles (usec): 00:40:04.892 | 1.00th=[ 396], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:04.892 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:04.892 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:40:04.892 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:04.892 | 99.99th=[42206] 00:40:04.892 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:40:04.892 slat (nsec): min=10043, max=38464, avg=11236.97, stdev=2220.03 00:40:04.892 clat (usec): min=156, max=3202, avg=193.44, stdev=135.38 00:40:04.892 lat (usec): min=166, max=3213, avg=204.68, stdev=135.46 00:40:04.892 clat percentiles (usec): 00:40:04.892 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 174], 00:40:04.892 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:40:04.892 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 221], 00:40:04.892 | 99.00th=[ 277], 99.50th=[ 359], 99.90th=[ 3195], 99.95th=[ 3195], 00:40:04.892 | 99.99th=[ 3195] 00:40:04.892 bw ( KiB/s): min= 4096, max= 4096, per=25.28%, avg=4096.00, stdev= 0.00, samples=1 00:40:04.892 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:04.892 lat (usec) : 250=94.39%, 500=1.12%, 750=0.19% 00:40:04.892 lat (msec) : 4=0.19%, 50=4.11% 00:40:04.892 cpu : usr=0.30%, sys=0.50%, ctx=535, majf=0, minf=1 00:40:04.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:04.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:04.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:04.892 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:04.892 
latency : target=0, window=0, percentile=100.00%, depth=1 00:40:04.892 job2: (groupid=0, jobs=1): err= 0: pid=1539499: Sat Dec 7 10:14:33 2024 00:40:04.892 read: IOPS=2250, BW=9003KiB/s (9219kB/s)(9012KiB/1001msec) 00:40:04.892 slat (nsec): min=7285, max=39829, avg=8319.02, stdev=1152.06 00:40:04.892 clat (usec): min=191, max=298, avg=230.15, stdev=18.94 00:40:04.892 lat (usec): min=199, max=308, avg=238.47, stdev=18.94 00:40:04.892 clat percentiles (usec): 00:40:04.892 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 210], 00:40:04.892 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 239], 60.00th=[ 243], 00:40:04.892 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 251], 95.00th=[ 255], 00:40:04.892 | 99.00th=[ 265], 99.50th=[ 265], 99.90th=[ 281], 99.95th=[ 297], 00:40:04.892 | 99.99th=[ 297] 00:40:04.892 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:40:04.892 slat (nsec): min=10356, max=50906, avg=11973.81, stdev=3314.55 00:40:04.892 clat (usec): min=130, max=3610, avg=163.30, stdev=73.95 00:40:04.892 lat (usec): min=141, max=3625, avg=175.27, stdev=74.50 00:40:04.892 clat percentiles (usec): 00:40:04.892 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 145], 00:40:04.892 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 153], 00:40:04.892 | 70.00th=[ 157], 80.00th=[ 176], 90.00th=[ 204], 95.00th=[ 225], 00:40:04.892 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 355], 99.95th=[ 392], 00:40:04.892 | 99.99th=[ 3621] 00:40:04.892 bw ( KiB/s): min=10680, max=10680, per=65.90%, avg=10680.00, stdev= 0.00, samples=1 00:40:04.892 iops : min= 2670, max= 2670, avg=2670.00, stdev= 0.00, samples=1 00:40:04.892 lat (usec) : 250=92.75%, 500=7.23% 00:40:04.892 lat (msec) : 4=0.02% 00:40:04.892 cpu : usr=3.60%, sys=8.10%, ctx=4813, majf=0, minf=1 00:40:04.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:04.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:04.892 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:04.892 issued rwts: total=2253,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:04.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:04.892 job3: (groupid=0, jobs=1): err= 0: pid=1539510: Sat Dec 7 10:14:33 2024 00:40:04.892 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:40:04.892 slat (nsec): min=10025, max=23784, avg=22617.68, stdev=2838.87 00:40:04.892 clat (usec): min=17605, max=41965, avg=40037.50, stdev=5021.95 00:40:04.892 lat (usec): min=17627, max=41988, avg=40060.12, stdev=5022.13 00:40:04.892 clat percentiles (usec): 00:40:04.892 | 1.00th=[17695], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:04.892 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:04.892 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:40:04.892 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:04.892 | 99.99th=[42206] 00:40:04.892 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:40:04.892 slat (nsec): min=9646, max=40186, avg=11065.65, stdev=1927.97 00:40:04.892 clat (usec): min=171, max=492, avg=221.73, stdev=26.83 00:40:04.892 lat (usec): min=181, max=532, avg=232.79, stdev=27.62 00:40:04.892 clat percentiles (usec): 00:40:04.892 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 200], 00:40:04.892 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:40:04.892 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 260], 00:40:04.892 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 494], 99.95th=[ 494], 00:40:04.892 | 99.99th=[ 494] 00:40:04.892 bw ( KiB/s): min= 4096, max= 4096, per=25.28%, avg=4096.00, stdev= 0.00, samples=1 00:40:04.892 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:04.892 lat (usec) : 250=85.21%, 500=10.67% 00:40:04.892 lat (msec) : 20=0.19%, 50=3.93% 00:40:04.892 cpu : usr=0.00%, sys=0.80%, ctx=534, 
majf=0, minf=1 00:40:04.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:04.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:04.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:04.892 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:04.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:04.892 00:40:04.892 Run status group 0 (all jobs): 00:40:04.892 READ: bw=9183KiB/s (9403kB/s), 87.8KiB/s-9003KiB/s (89.9kB/s-9219kB/s), io=9284KiB (9507kB), run=1001-1011msec 00:40:04.892 WRITE: bw=15.8MiB/s (16.6MB/s), 2026KiB/s-9.99MiB/s (2074kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1011msec 00:40:04.892 00:40:04.892 Disk stats (read/write): 00:40:04.892 nvme0n1: ios=67/512, merge=0/0, ticks=729/91, in_queue=820, util=81.96% 00:40:04.892 nvme0n2: ios=67/512, merge=0/0, ticks=728/95, in_queue=823, util=85.67% 00:40:04.892 nvme0n3: ios=1828/2048, merge=0/0, ticks=493/306, in_queue=799, util=89.87% 00:40:04.892 nvme0n4: ios=73/512, merge=0/0, ticks=737/110, in_queue=847, util=94.34% 00:40:04.892 10:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:40:04.892 [global] 00:40:04.892 thread=1 00:40:04.892 invalidate=1 00:40:04.892 rw=randwrite 00:40:04.892 time_based=1 00:40:04.892 runtime=1 00:40:04.892 ioengine=libaio 00:40:04.892 direct=1 00:40:04.892 bs=4096 00:40:04.892 iodepth=1 00:40:04.892 norandommap=0 00:40:04.892 numjobs=1 00:40:04.892 00:40:04.892 verify_dump=1 00:40:04.892 verify_backlog=512 00:40:04.892 verify_state_save=0 00:40:04.892 do_verify=1 00:40:04.892 verify=crc32c-intel 00:40:04.892 [job0] 00:40:04.892 filename=/dev/nvme0n1 00:40:04.892 [job1] 00:40:04.892 filename=/dev/nvme0n2 00:40:04.892 [job2] 00:40:04.892 filename=/dev/nvme0n3 00:40:04.892 [job3] 00:40:04.892 
filename=/dev/nvme0n4 00:40:04.892 Could not set queue depth (nvme0n1) 00:40:04.892 Could not set queue depth (nvme0n2) 00:40:04.892 Could not set queue depth (nvme0n3) 00:40:04.892 Could not set queue depth (nvme0n4) 00:40:05.149 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:05.149 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:05.150 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:05.150 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:05.150 fio-3.35 00:40:05.150 Starting 4 threads 00:40:06.524 00:40:06.524 job0: (groupid=0, jobs=1): err= 0: pid=1539887: Sat Dec 7 10:14:34 2024 00:40:06.524 read: IOPS=21, BW=85.7KiB/s (87.7kB/s)(88.0KiB/1027msec) 00:40:06.524 slat (nsec): min=9064, max=25852, avg=18763.68, stdev=6486.40 00:40:06.524 clat (usec): min=40876, max=41110, avg=40973.70, stdev=62.13 00:40:06.524 lat (usec): min=40886, max=41121, avg=40992.46, stdev=59.58 00:40:06.524 clat percentiles (usec): 00:40:06.524 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:06.524 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:06.524 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:06.524 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:06.524 | 99.99th=[41157] 00:40:06.524 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:40:06.524 slat (nsec): min=9551, max=44262, avg=10976.63, stdev=1774.88 00:40:06.524 clat (usec): min=144, max=441, avg=228.38, stdev=26.62 00:40:06.524 lat (usec): min=155, max=486, avg=239.35, stdev=27.29 00:40:06.524 clat percentiles (usec): 00:40:06.524 | 1.00th=[ 167], 5.00th=[ 194], 10.00th=[ 204], 20.00th=[ 212], 00:40:06.524 | 30.00th=[ 219], 40.00th=[ 
223], 50.00th=[ 225], 60.00th=[ 231], 00:40:06.524 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 281], 00:40:06.524 | 99.00th=[ 302], 99.50th=[ 322], 99.90th=[ 441], 99.95th=[ 441], 00:40:06.524 | 99.99th=[ 441] 00:40:06.524 bw ( KiB/s): min= 4096, max= 4096, per=20.04%, avg=4096.00, stdev= 0.00, samples=1 00:40:06.524 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:06.524 lat (usec) : 250=82.96%, 500=12.92% 00:40:06.524 lat (msec) : 50=4.12% 00:40:06.524 cpu : usr=0.19%, sys=0.58%, ctx=536, majf=0, minf=1 00:40:06.524 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:06.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.524 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:06.524 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:06.524 job1: (groupid=0, jobs=1): err= 0: pid=1539900: Sat Dec 7 10:14:34 2024 00:40:06.524 read: IOPS=21, BW=85.5KiB/s (87.6kB/s)(88.0KiB/1029msec) 00:40:06.524 slat (nsec): min=9659, max=24724, avg=22686.64, stdev=2988.84 00:40:06.524 clat (usec): min=40831, max=41340, avg=40986.10, stdev=98.06 00:40:06.524 lat (usec): min=40856, max=41350, avg=41008.79, stdev=95.69 00:40:06.524 clat percentiles (usec): 00:40:06.524 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:06.524 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:06.524 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:06.524 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:06.524 | 99.99th=[41157] 00:40:06.524 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:40:06.524 slat (nsec): min=10586, max=37738, avg=12035.80, stdev=1820.84 00:40:06.524 clat (usec): min=154, max=311, avg=229.13, stdev=25.58 00:40:06.524 lat (usec): min=165, 
max=324, avg=241.17, stdev=25.71 00:40:06.524 clat percentiles (usec): 00:40:06.524 | 1.00th=[ 157], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:40:06.524 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 231], 00:40:06.524 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 281], 00:40:06.524 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 314], 99.95th=[ 314], 00:40:06.524 | 99.99th=[ 314] 00:40:06.524 bw ( KiB/s): min= 4096, max= 4096, per=20.04%, avg=4096.00, stdev= 0.00, samples=1 00:40:06.524 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:06.524 lat (usec) : 250=79.96%, 500=15.92% 00:40:06.524 lat (msec) : 50=4.12% 00:40:06.524 cpu : usr=0.49%, sys=0.88%, ctx=535, majf=0, minf=1 00:40:06.526 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:06.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.526 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:06.526 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:06.526 job2: (groupid=0, jobs=1): err= 0: pid=1539917: Sat Dec 7 10:14:34 2024 00:40:06.526 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:40:06.526 slat (nsec): min=6727, max=26118, avg=7735.51, stdev=1209.63 00:40:06.526 clat (usec): min=207, max=567, avg=264.29, stdev=44.32 00:40:06.526 lat (usec): min=214, max=574, avg=272.02, stdev=44.75 00:40:06.526 clat percentiles (usec): 00:40:06.526 | 1.00th=[ 227], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 245], 00:40:06.526 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 255], 00:40:06.526 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[ 343], 00:40:06.526 | 99.00th=[ 502], 99.50th=[ 502], 99.90th=[ 519], 99.95th=[ 529], 00:40:06.526 | 99.99th=[ 570] 00:40:06.526 write: IOPS=2184, BW=8739KiB/s (8949kB/s)(8748KiB/1001msec); 0 zone resets 00:40:06.526 slat 
(nsec): min=9553, max=39300, avg=10694.02, stdev=1490.99 00:40:06.526 clat (usec): min=146, max=407, avg=186.98, stdev=26.53 00:40:06.526 lat (usec): min=157, max=446, avg=197.67, stdev=26.69 00:40:06.526 clat percentiles (usec): 00:40:06.526 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:40:06.526 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:40:06.526 | 70.00th=[ 190], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 237], 00:40:06.526 | 99.00th=[ 260], 99.50th=[ 281], 99.90th=[ 334], 99.95th=[ 359], 00:40:06.526 | 99.99th=[ 408] 00:40:06.526 bw ( KiB/s): min= 8560, max= 8560, per=41.87%, avg=8560.00, stdev= 0.00, samples=1 00:40:06.526 iops : min= 2140, max= 2140, avg=2140.00, stdev= 0.00, samples=1 00:40:06.526 lat (usec) : 250=69.68%, 500=29.85%, 750=0.47% 00:40:06.526 cpu : usr=2.80%, sys=3.60%, ctx=4236, majf=0, minf=1 00:40:06.526 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:06.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.526 issued rwts: total=2048,2187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:06.526 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:06.526 job3: (groupid=0, jobs=1): err= 0: pid=1539922: Sat Dec 7 10:14:34 2024 00:40:06.526 read: IOPS=1832, BW=7329KiB/s (7505kB/s)(7336KiB/1001msec) 00:40:06.526 slat (nsec): min=6747, max=23652, avg=7759.64, stdev=871.65 00:40:06.526 clat (usec): min=226, max=648, avg=301.90, stdev=59.97 00:40:06.526 lat (usec): min=233, max=655, avg=309.66, stdev=59.97 00:40:06.526 clat percentiles (usec): 00:40:06.526 | 1.00th=[ 247], 5.00th=[ 260], 10.00th=[ 269], 20.00th=[ 273], 00:40:06.526 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 281], 60.00th=[ 285], 00:40:06.526 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 388], 95.00th=[ 474], 00:40:06.526 | 99.00th=[ 502], 99.50th=[ 506], 99.90th=[ 515], 99.95th=[ 652], 
00:40:06.526 | 99.99th=[ 652] 00:40:06.526 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:40:06.526 slat (nsec): min=9290, max=41816, avg=10896.50, stdev=1745.58 00:40:06.526 clat (usec): min=161, max=386, avg=196.06, stdev=13.58 00:40:06.526 lat (usec): min=171, max=414, avg=206.95, stdev=14.30 00:40:06.526 clat percentiles (usec): 00:40:06.526 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:40:06.526 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 198], 00:40:06.526 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 210], 95.00th=[ 217], 00:40:06.526 | 99.00th=[ 229], 99.50th=[ 235], 99.90th=[ 297], 99.95th=[ 375], 00:40:06.526 | 99.99th=[ 388] 00:40:06.526 bw ( KiB/s): min= 8192, max= 8192, per=40.07%, avg=8192.00, stdev= 0.00, samples=1 00:40:06.526 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:40:06.526 lat (usec) : 250=53.48%, 500=45.96%, 750=0.57% 00:40:06.526 cpu : usr=2.40%, sys=3.30%, ctx=3882, majf=0, minf=1 00:40:06.526 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:06.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.526 issued rwts: total=1834,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:06.526 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:06.526 00:40:06.526 Run status group 0 (all jobs): 00:40:06.526 READ: bw=14.9MiB/s (15.6MB/s), 85.5KiB/s-8184KiB/s (87.6kB/s-8380kB/s), io=15.3MiB (16.1MB), run=1001-1029msec 00:40:06.526 WRITE: bw=20.0MiB/s (20.9MB/s), 1990KiB/s-8739KiB/s (2038kB/s-8949kB/s), io=20.5MiB (21.5MB), run=1001-1029msec 00:40:06.526 00:40:06.526 Disk stats (read/write): 00:40:06.526 nvme0n1: ios=50/512, merge=0/0, ticks=1737/119, in_queue=1856, util=99.00% 00:40:06.526 nvme0n2: ios=42/512, merge=0/0, ticks=1684/103, in_queue=1787, util=98.27% 00:40:06.526 nvme0n3: ios=1694/2048, merge=0/0, 
ticks=1371/367, in_queue=1738, util=99.06% 00:40:06.526 nvme0n4: ios=1536/1828, merge=0/0, ticks=447/350, in_queue=797, util=89.71% 00:40:06.526 10:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:40:06.526 [global] 00:40:06.526 thread=1 00:40:06.526 invalidate=1 00:40:06.526 rw=write 00:40:06.526 time_based=1 00:40:06.526 runtime=1 00:40:06.526 ioengine=libaio 00:40:06.526 direct=1 00:40:06.526 bs=4096 00:40:06.526 iodepth=128 00:40:06.526 norandommap=0 00:40:06.526 numjobs=1 00:40:06.526 00:40:06.526 verify_dump=1 00:40:06.526 verify_backlog=512 00:40:06.526 verify_state_save=0 00:40:06.526 do_verify=1 00:40:06.526 verify=crc32c-intel 00:40:06.526 [job0] 00:40:06.526 filename=/dev/nvme0n1 00:40:06.527 [job1] 00:40:06.527 filename=/dev/nvme0n2 00:40:06.527 [job2] 00:40:06.527 filename=/dev/nvme0n3 00:40:06.527 [job3] 00:40:06.527 filename=/dev/nvme0n4 00:40:06.527 Could not set queue depth (nvme0n1) 00:40:06.527 Could not set queue depth (nvme0n2) 00:40:06.527 Could not set queue depth (nvme0n3) 00:40:06.527 Could not set queue depth (nvme0n4) 00:40:06.527 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:06.527 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:06.527 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:06.527 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:06.527 fio-3.35 00:40:06.527 Starting 4 threads 00:40:07.898 00:40:07.898 job0: (groupid=0, jobs=1): err= 0: pid=1540294: Sat Dec 7 10:14:36 2024 00:40:07.898 read: IOPS=2768, BW=10.8MiB/s (11.3MB/s)(11.0MiB/1017msec) 00:40:07.898 slat (usec): min=2, max=14962, avg=164.89, stdev=1073.65 00:40:07.898 
clat (msec): min=5, max=105, avg=17.90, stdev=14.94 00:40:07.898 lat (msec): min=5, max=105, avg=18.06, stdev=15.11 00:40:07.898 clat percentiles (msec): 00:40:07.898 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:40:07.898 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 17], 00:40:07.898 | 70.00th=[ 19], 80.00th=[ 21], 90.00th=[ 27], 95.00th=[ 47], 00:40:07.898 | 99.00th=[ 92], 99.50th=[ 101], 99.90th=[ 106], 99.95th=[ 106], 00:40:07.898 | 99.99th=[ 106] 00:40:07.898 write: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1017msec); 0 zone resets 00:40:07.898 slat (usec): min=3, max=13315, avg=171.10, stdev=970.78 00:40:07.898 clat (usec): min=1735, max=105792, avg=25474.34, stdev=25562.10 00:40:07.898 lat (usec): min=1747, max=105810, avg=25645.45, stdev=25704.21 00:40:07.898 clat percentiles (msec): 00:40:07.898 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:40:07.898 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 15], 60.00th=[ 16], 00:40:07.898 | 70.00th=[ 22], 80.00th=[ 37], 90.00th=[ 81], 95.00th=[ 91], 00:40:07.898 | 99.00th=[ 101], 99.50th=[ 102], 99.90th=[ 106], 99.95th=[ 106], 00:40:07.898 | 99.99th=[ 106] 00:40:07.898 bw ( KiB/s): min=11145, max=13408, per=19.74%, avg=12276.50, stdev=1600.18, samples=2 00:40:07.898 iops : min= 2786, max= 3352, avg=3069.00, stdev=400.22, samples=2 00:40:07.898 lat (msec) : 2=0.27%, 10=21.77%, 20=51.29%, 50=17.20%, 100=8.73% 00:40:07.898 lat (msec) : 250=0.73% 00:40:07.898 cpu : usr=2.76%, sys=3.54%, ctx=232, majf=0, minf=1 00:40:07.898 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:40:07.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:07.898 issued rwts: total=2816,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.898 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:07.898 job1: (groupid=0, jobs=1): err= 0: pid=1540295: Sat Dec 7 
10:14:36 2024 00:40:07.898 read: IOPS=3363, BW=13.1MiB/s (13.8MB/s)(13.3MiB/1016msec) 00:40:07.898 slat (nsec): min=1076, max=29285k, avg=105579.55, stdev=930648.30 00:40:07.898 clat (usec): min=5308, max=51517, avg=15803.79, stdev=8012.59 00:40:07.898 lat (usec): min=5638, max=51544, avg=15909.37, stdev=8086.74 00:40:07.898 clat percentiles (usec): 00:40:07.898 | 1.00th=[ 5669], 5.00th=[ 6587], 10.00th=[ 7373], 20.00th=[ 9110], 00:40:07.898 | 30.00th=[10814], 40.00th=[11731], 50.00th=[13435], 60.00th=[15533], 00:40:07.898 | 70.00th=[18744], 80.00th=[22152], 90.00th=[27657], 95.00th=[31851], 00:40:07.898 | 99.00th=[39584], 99.50th=[40109], 99.90th=[40633], 99.95th=[40633], 00:40:07.898 | 99.99th=[51643] 00:40:07.898 write: IOPS=3527, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1016msec); 0 zone resets 00:40:07.898 slat (nsec): min=1943, max=14605k, avg=117879.25, stdev=856265.74 00:40:07.898 clat (usec): min=1265, max=82721, avg=20936.75, stdev=16792.30 00:40:07.898 lat (usec): min=1276, max=82723, avg=21054.63, stdev=16841.54 00:40:07.898 clat percentiles (usec): 00:40:07.898 | 1.00th=[ 2409], 5.00th=[ 6259], 10.00th=[ 7046], 20.00th=[ 8455], 00:40:07.898 | 30.00th=[10290], 40.00th=[11863], 50.00th=[13435], 60.00th=[17433], 00:40:07.898 | 70.00th=[22938], 80.00th=[35390], 90.00th=[41157], 95.00th=[58459], 00:40:07.898 | 99.00th=[80217], 99.50th=[81265], 99.90th=[82314], 99.95th=[82314], 00:40:07.898 | 99.99th=[82314] 00:40:07.898 bw ( KiB/s): min=12120, max=16552, per=23.05%, avg=14336.00, stdev=3133.90, samples=2 00:40:07.898 iops : min= 3030, max= 4138, avg=3584.00, stdev=783.47, samples=2 00:40:07.898 lat (msec) : 2=0.23%, 4=0.94%, 10=25.44%, 20=42.17%, 50=27.30% 00:40:07.898 lat (msec) : 100=3.93% 00:40:07.898 cpu : usr=2.66%, sys=3.55%, ctx=241, majf=0, minf=2 00:40:07.898 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:40:07.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.898 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:07.898 issued rwts: total=3417,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.898 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:07.898 job2: (groupid=0, jobs=1): err= 0: pid=1540297: Sat Dec 7 10:14:36 2024 00:40:07.898 read: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec) 00:40:07.898 slat (nsec): min=1126, max=28387k, avg=143406.33, stdev=1147501.57 00:40:07.898 clat (usec): min=2533, max=60328, avg=19017.56, stdev=12829.77 00:40:07.898 lat (usec): min=2549, max=70618, avg=19160.96, stdev=12943.94 00:40:07.898 clat percentiles (usec): 00:40:07.898 | 1.00th=[ 3261], 5.00th=[ 5538], 10.00th=[ 7898], 20.00th=[ 8586], 00:40:07.898 | 30.00th=[10028], 40.00th=[11338], 50.00th=[14222], 60.00th=[18220], 00:40:07.898 | 70.00th=[21890], 80.00th=[29754], 90.00th=[42206], 95.00th=[45351], 00:40:07.898 | 99.00th=[51643], 99.50th=[54789], 99.90th=[58459], 99.95th=[58459], 00:40:07.898 | 99.99th=[60556] 00:40:07.898 write: IOPS=3189, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1015msec); 0 zone resets 00:40:07.898 slat (usec): min=2, max=25901, avg=158.17, stdev=1310.87 00:40:07.898 clat (usec): min=1763, max=93926, avg=21739.74, stdev=20877.89 00:40:07.898 lat (usec): min=1803, max=93951, avg=21897.92, stdev=21035.76 00:40:07.898 clat percentiles (usec): 00:40:07.898 | 1.00th=[ 2868], 5.00th=[ 5276], 10.00th=[ 6849], 20.00th=[ 8979], 00:40:07.898 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[12387], 60.00th=[15008], 00:40:07.898 | 70.00th=[16319], 80.00th=[45876], 90.00th=[57410], 95.00th=[69731], 00:40:07.898 | 99.00th=[82314], 99.50th=[84411], 99.90th=[88605], 99.95th=[88605], 00:40:07.898 | 99.99th=[93848] 00:40:07.898 bw ( KiB/s): min=12288, max=12592, per=20.00%, avg=12440.00, stdev=214.96, samples=2 00:40:07.898 iops : min= 3072, max= 3148, avg=3110.00, stdev=53.74, samples=2 00:40:07.898 lat (msec) : 2=0.02%, 4=2.25%, 10=31.32%, 20=35.55%, 50=21.27% 00:40:07.898 lat (msec) : 100=9.59% 00:40:07.898 cpu : 
usr=1.97%, sys=3.75%, ctx=218, majf=0, minf=1 00:40:07.898 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:40:07.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:07.898 issued rwts: total=3072,3237,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.898 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:07.898 job3: (groupid=0, jobs=1): err= 0: pid=1540298: Sat Dec 7 10:14:36 2024 00:40:07.898 read: IOPS=5537, BW=21.6MiB/s (22.7MB/s)(22.0MiB/1017msec) 00:40:07.898 slat (nsec): min=1343, max=9872.9k, avg=85539.12, stdev=617834.24 00:40:07.898 clat (usec): min=2626, max=26794, avg=10651.06, stdev=3001.17 00:40:07.898 lat (usec): min=2632, max=26841, avg=10736.60, stdev=3037.95 00:40:07.898 clat percentiles (usec): 00:40:07.898 | 1.00th=[ 4817], 5.00th=[ 6587], 10.00th=[ 7439], 20.00th=[ 8455], 00:40:07.898 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[10028], 60.00th=[10814], 00:40:07.898 | 70.00th=[11469], 80.00th=[13042], 90.00th=[15008], 95.00th=[17171], 00:40:07.898 | 99.00th=[17957], 99.50th=[18220], 99.90th=[19530], 99.95th=[25822], 00:40:07.898 | 99.99th=[26870] 00:40:07.898 write: IOPS=5821, BW=22.7MiB/s (23.8MB/s)(23.1MiB/1017msec); 0 zone resets 00:40:07.898 slat (usec): min=2, max=27623, avg=84.55, stdev=747.21 00:40:07.898 clat (usec): min=1544, max=86886, avg=10692.77, stdev=10308.54 00:40:07.898 lat (usec): min=1568, max=86890, avg=10777.33, stdev=10359.24 00:40:07.898 clat percentiles (usec): 00:40:07.898 | 1.00th=[ 3032], 5.00th=[ 5145], 10.00th=[ 6521], 20.00th=[ 7701], 00:40:07.899 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:40:07.899 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[11600], 95.00th=[13435], 00:40:07.899 | 99.00th=[82314], 99.50th=[84411], 99.90th=[86508], 99.95th=[86508], 00:40:07.899 | 99.99th=[86508] 00:40:07.899 bw ( KiB/s): min=21760, max=24576, 
per=37.25%, avg=23168.00, stdev=1991.21, samples=2 00:40:07.899 iops : min= 5440, max= 6144, avg=5792.00, stdev=497.80, samples=2 00:40:07.899 lat (msec) : 2=0.02%, 4=1.83%, 10=62.10%, 20=34.70%, 50=0.32% 00:40:07.899 lat (msec) : 100=1.03% 00:40:07.899 cpu : usr=3.64%, sys=5.61%, ctx=614, majf=0, minf=1 00:40:07.899 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:40:07.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:07.899 issued rwts: total=5632,5920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.899 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:07.899 00:40:07.899 Run status group 0 (all jobs): 00:40:07.899 READ: bw=57.4MiB/s (60.2MB/s), 10.8MiB/s-21.6MiB/s (11.3MB/s-22.7MB/s), io=58.3MiB (61.2MB), run=1015-1017msec 00:40:07.899 WRITE: bw=60.7MiB/s (63.7MB/s), 11.8MiB/s-22.7MiB/s (12.4MB/s-23.8MB/s), io=61.8MiB (64.8MB), run=1015-1017msec 00:40:07.899 00:40:07.899 Disk stats (read/write): 00:40:07.899 nvme0n1: ios=2067/2400, merge=0/0, ticks=38484/66537, in_queue=105021, util=98.10% 00:40:07.899 nvme0n2: ios=3065/3079, merge=0/0, ticks=44310/56291, in_queue=100601, util=91.01% 00:40:07.899 nvme0n3: ios=2071/2551, merge=0/0, ticks=20065/23407, in_queue=43472, util=93.40% 00:40:07.899 nvme0n4: ios=5142/5391, merge=0/0, ticks=50898/48343, in_queue=99241, util=100.00% 00:40:07.899 10:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:40:07.899 [global] 00:40:07.899 thread=1 00:40:07.899 invalidate=1 00:40:07.899 rw=randwrite 00:40:07.899 time_based=1 00:40:07.899 runtime=1 00:40:07.899 ioengine=libaio 00:40:07.899 direct=1 00:40:07.899 bs=4096 00:40:07.899 iodepth=128 00:40:07.899 norandommap=0 00:40:07.899 numjobs=1 00:40:07.899 00:40:07.899 
verify_dump=1 00:40:07.899 verify_backlog=512 00:40:07.899 verify_state_save=0 00:40:07.899 do_verify=1 00:40:07.899 verify=crc32c-intel 00:40:07.899 [job0] 00:40:07.899 filename=/dev/nvme0n1 00:40:07.899 [job1] 00:40:07.899 filename=/dev/nvme0n2 00:40:07.899 [job2] 00:40:07.899 filename=/dev/nvme0n3 00:40:07.899 [job3] 00:40:07.899 filename=/dev/nvme0n4 00:40:07.899 Could not set queue depth (nvme0n1) 00:40:07.899 Could not set queue depth (nvme0n2) 00:40:07.899 Could not set queue depth (nvme0n3) 00:40:07.899 Could not set queue depth (nvme0n4) 00:40:08.155 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:08.155 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:08.155 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:08.155 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:08.155 fio-3.35 00:40:08.155 Starting 4 threads 00:40:09.525 00:40:09.525 job0: (groupid=0, jobs=1): err= 0: pid=1540668: Sat Dec 7 10:14:38 2024 00:40:09.525 read: IOPS=5926, BW=23.2MiB/s (24.3MB/s)(23.4MiB/1009msec) 00:40:09.525 slat (nsec): min=1260, max=9372.6k, avg=85513.32, stdev=604583.11 00:40:09.525 clat (usec): min=2924, max=21518, avg=10583.70, stdev=2619.89 00:40:09.525 lat (usec): min=2930, max=21525, avg=10669.21, stdev=2655.97 00:40:09.525 clat percentiles (usec): 00:40:09.525 | 1.00th=[ 4015], 5.00th=[ 7177], 10.00th=[ 7767], 20.00th=[ 9241], 00:40:09.525 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10290], 00:40:09.525 | 70.00th=[10814], 80.00th=[12649], 90.00th=[14353], 95.00th=[16057], 00:40:09.525 | 99.00th=[18220], 99.50th=[18482], 99.90th=[19268], 99.95th=[19268], 00:40:09.525 | 99.99th=[21627] 00:40:09.525 write: IOPS=6089, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1009msec); 0 zone resets 00:40:09.525 
slat (usec): min=2, max=46339, avg=75.33, stdev=686.84 00:40:09.525 clat (usec): min=1331, max=59207, avg=10510.36, stdev=7088.78 00:40:09.525 lat (usec): min=1343, max=59231, avg=10585.69, stdev=7112.67 00:40:09.525 clat percentiles (usec): 00:40:09.525 | 1.00th=[ 2966], 5.00th=[ 4817], 10.00th=[ 6259], 20.00th=[ 8291], 00:40:09.525 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10290], 00:40:09.525 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10945], 95.00th=[14091], 00:40:09.525 | 99.00th=[55313], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:40:09.525 | 99.99th=[58983] 00:40:09.525 bw ( KiB/s): min=24576, max=24576, per=35.44%, avg=24576.00, stdev= 0.00, samples=2 00:40:09.525 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:40:09.525 lat (msec) : 2=0.16%, 4=1.81%, 10=47.62%, 20=49.36%, 50=0.04% 00:40:09.525 lat (msec) : 100=1.01% 00:40:09.525 cpu : usr=2.78%, sys=6.65%, ctx=709, majf=0, minf=1 00:40:09.525 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:40:09.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:09.525 issued rwts: total=5980,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.525 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:09.525 job1: (groupid=0, jobs=1): err= 0: pid=1540669: Sat Dec 7 10:14:38 2024 00:40:09.525 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:40:09.525 slat (nsec): min=1353, max=5963.0k, avg=110573.54, stdev=560870.89 00:40:09.525 clat (usec): min=8921, max=22494, avg=13130.64, stdev=1989.98 00:40:09.525 lat (usec): min=8931, max=22498, avg=13241.21, stdev=2060.34 00:40:09.525 clat percentiles (usec): 00:40:09.525 | 1.00th=[ 8979], 5.00th=[10159], 10.00th=[10945], 20.00th=[12256], 00:40:09.525 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:40:09.525 | 70.00th=[13304], 80.00th=[13566], 
90.00th=[15270], 95.00th=[17171], 00:40:09.525 | 99.00th=[21103], 99.50th=[21365], 99.90th=[22414], 99.95th=[22414], 00:40:09.525 | 99.99th=[22414] 00:40:09.525 write: IOPS=2977, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1010msec); 0 zone resets 00:40:09.525 slat (usec): min=2, max=25494, avg=232.41, stdev=1210.40 00:40:09.525 clat (usec): min=5420, max=65801, avg=31341.16, stdev=15072.60 00:40:09.525 lat (usec): min=5435, max=65831, avg=31573.57, stdev=15144.80 00:40:09.525 clat percentiles (usec): 00:40:09.525 | 1.00th=[11863], 5.00th=[14615], 10.00th=[18220], 20.00th=[20841], 00:40:09.525 | 30.00th=[21627], 40.00th=[22152], 50.00th=[22414], 60.00th=[30540], 00:40:09.525 | 70.00th=[35390], 80.00th=[47973], 90.00th=[60556], 95.00th=[61604], 00:40:09.525 | 99.00th=[63177], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:40:09.525 | 99.99th=[65799] 00:40:09.525 bw ( KiB/s): min=10752, max=12288, per=16.61%, avg=11520.00, stdev=1086.12, samples=2 00:40:09.525 iops : min= 2688, max= 3072, avg=2880.00, stdev=271.53, samples=2 00:40:09.525 lat (msec) : 10=1.85%, 20=51.19%, 50=37.58%, 100=9.38% 00:40:09.525 cpu : usr=2.48%, sys=3.47%, ctx=398, majf=0, minf=2 00:40:09.525 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:40:09.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:09.526 issued rwts: total=2560,3007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.526 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:09.526 job2: (groupid=0, jobs=1): err= 0: pid=1540670: Sat Dec 7 10:14:38 2024 00:40:09.526 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec) 00:40:09.526 slat (nsec): min=1343, max=11956k, avg=133369.89, stdev=841028.78 00:40:09.526 clat (usec): min=4371, max=40906, avg=15048.03, stdev=5247.77 00:40:09.526 lat (usec): min=4377, max=40913, avg=15181.40, stdev=5311.22 00:40:09.526 clat percentiles (usec): 
00:40:09.526 | 1.00th=[ 5145], 5.00th=[ 8717], 10.00th=[11731], 20.00th=[11994], 00:40:09.526 | 30.00th=[12125], 40.00th=[12649], 50.00th=[12911], 60.00th=[13304], 00:40:09.526 | 70.00th=[16319], 80.00th=[17957], 90.00th=[22152], 95.00th=[27132], 00:40:09.526 | 99.00th=[32113], 99.50th=[34341], 99.90th=[41157], 99.95th=[41157], 00:40:09.526 | 99.99th=[41157] 00:40:09.526 write: IOPS=3393, BW=13.3MiB/s (13.9MB/s)(13.4MiB/1013msec); 0 zone resets 00:40:09.526 slat (usec): min=2, max=12729, avg=165.93, stdev=674.64 00:40:09.526 clat (usec): min=3191, max=52256, avg=23900.27, stdev=11302.33 00:40:09.526 lat (usec): min=3201, max=52263, avg=24066.20, stdev=11379.28 00:40:09.526 clat percentiles (usec): 00:40:09.526 | 1.00th=[ 4948], 5.00th=[ 8848], 10.00th=[11338], 20.00th=[13960], 00:40:09.526 | 30.00th=[16450], 40.00th=[20841], 50.00th=[21890], 60.00th=[22152], 00:40:09.526 | 70.00th=[27132], 80.00th=[34341], 90.00th=[41681], 95.00th=[46924], 00:40:09.526 | 99.00th=[51119], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:40:09.526 | 99.99th=[52167] 00:40:09.526 bw ( KiB/s): min=12488, max=14000, per=19.10%, avg=13244.00, stdev=1069.15, samples=2 00:40:09.526 iops : min= 3122, max= 3500, avg=3311.00, stdev=267.29, samples=2 00:40:09.526 lat (msec) : 4=0.28%, 10=6.18%, 20=53.18%, 50=39.49%, 100=0.88% 00:40:09.526 cpu : usr=2.77%, sys=4.05%, ctx=421, majf=0, minf=1 00:40:09.526 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:40:09.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:09.526 issued rwts: total=3072,3438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.526 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:09.526 job3: (groupid=0, jobs=1): err= 0: pid=1540671: Sat Dec 7 10:14:38 2024 00:40:09.526 read: IOPS=5029, BW=19.6MiB/s (20.6MB/s)(20.6MiB/1051msec) 00:40:09.526 slat (nsec): min=1678, 
max=9202.5k, avg=94335.34, stdev=528711.06 00:40:09.526 clat (msec): min=3, max=121, avg=12.52, stdev= 8.09 00:40:09.526 lat (msec): min=3, max=121, avg=12.62, stdev= 8.12 00:40:09.526 clat percentiles (msec): 00:40:09.526 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:40:09.526 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:40:09.526 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 15], 00:40:09.526 | 99.00th=[ 53], 99.50th=[ 59], 99.90th=[ 113], 99.95th=[ 113], 00:40:09.526 | 99.99th=[ 122] 00:40:09.526 write: IOPS=5358, BW=20.9MiB/s (21.9MB/s)(22.0MiB/1051msec); 0 zone resets 00:40:09.526 slat (usec): min=2, max=13812, avg=84.30, stdev=521.13 00:40:09.526 clat (usec): min=1754, max=72229, avg=11877.66, stdev=5826.52 00:40:09.526 lat (usec): min=1766, max=72236, avg=11961.96, stdev=5850.55 00:40:09.526 clat percentiles (usec): 00:40:09.526 | 1.00th=[ 5342], 5.00th=[ 7898], 10.00th=[ 8455], 20.00th=[ 9372], 00:40:09.526 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11469], 60.00th=[11600], 00:40:09.526 | 70.00th=[11731], 80.00th=[11994], 90.00th=[14615], 95.00th=[15270], 00:40:09.526 | 99.00th=[41681], 99.50th=[51643], 99.90th=[71828], 99.95th=[71828], 00:40:09.526 | 99.99th=[71828] 00:40:09.526 bw ( KiB/s): min=20624, max=24432, per=32.49%, avg=22528.00, stdev=2692.66, samples=2 00:40:09.526 iops : min= 5156, max= 6108, avg=5632.00, stdev=673.17, samples=2 00:40:09.526 lat (msec) : 2=0.09%, 4=0.15%, 10=24.82%, 20=71.91%, 50=1.48% 00:40:09.526 lat (msec) : 100=1.46%, 250=0.09% 00:40:09.526 cpu : usr=3.62%, sys=6.76%, ctx=563, majf=0, minf=1 00:40:09.526 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:40:09.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:09.526 issued rwts: total=5286,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.526 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:40:09.526 00:40:09.526 Run status group 0 (all jobs): 00:40:09.526 READ: bw=62.8MiB/s (65.9MB/s), 9.90MiB/s-23.2MiB/s (10.4MB/s-24.3MB/s), io=66.0MiB (69.2MB), run=1009-1051msec 00:40:09.526 WRITE: bw=67.7MiB/s (71.0MB/s), 11.6MiB/s-23.8MiB/s (12.2MB/s-24.9MB/s), io=71.2MiB (74.6MB), run=1009-1051msec 00:40:09.526 00:40:09.526 Disk stats (read/write): 00:40:09.526 nvme0n1: ios=5005/5120, merge=0/0, ticks=51609/48071, in_queue=99680, util=89.78% 00:40:09.526 nvme0n2: ios=2179/2560, merge=0/0, ticks=14346/38656, in_queue=53002, util=87.72% 00:40:09.526 nvme0n3: ios=2583/2887, merge=0/0, ticks=38257/66451, in_queue=104708, util=93.75% 00:40:09.526 nvme0n4: ios=4484/4608, merge=0/0, ticks=21702/23675, in_queue=45377, util=98.11% 00:40:09.526 10:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:40:09.526 10:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1540803 00:40:09.526 10:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:40:09.526 10:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:40:09.526 [global] 00:40:09.526 thread=1 00:40:09.526 invalidate=1 00:40:09.526 rw=read 00:40:09.526 time_based=1 00:40:09.526 runtime=10 00:40:09.526 ioengine=libaio 00:40:09.526 direct=1 00:40:09.526 bs=4096 00:40:09.526 iodepth=1 00:40:09.526 norandommap=1 00:40:09.526 numjobs=1 00:40:09.526 00:40:09.526 [job0] 00:40:09.526 filename=/dev/nvme0n1 00:40:09.526 [job1] 00:40:09.526 filename=/dev/nvme0n2 00:40:09.526 [job2] 00:40:09.526 filename=/dev/nvme0n3 00:40:09.526 [job3] 00:40:09.526 filename=/dev/nvme0n4 00:40:09.526 Could not set queue depth (nvme0n1) 00:40:09.526 Could not set queue depth (nvme0n2) 00:40:09.526 Could not set queue depth (nvme0n3) 00:40:09.526 
Could not set queue depth (nvme0n4) 00:40:09.784 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:09.784 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:09.784 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:09.784 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:09.784 fio-3.35 00:40:09.784 Starting 4 threads 00:40:13.066 10:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:40:13.066 10:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:40:13.066 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=311296, buflen=4096 00:40:13.066 fio: pid=1541047, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:13.066 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=294912, buflen=4096 00:40:13.066 fio: pid=1541046, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:13.066 10:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:13.066 10:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:40:13.066 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=5894144, buflen=4096 00:40:13.066 fio: pid=1541044, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:13.066 10:14:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:13.066 10:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:40:13.325 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=13840384, buflen=4096 00:40:13.325 fio: pid=1541045, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:13.325 10:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:13.325 10:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:40:13.325 00:40:13.325 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1541044: Sat Dec 7 10:14:41 2024 00:40:13.325 read: IOPS=468, BW=1874KiB/s (1919kB/s)(5756KiB/3071msec) 00:40:13.325 slat (usec): min=6, max=32862, avg=31.63, stdev=865.79 00:40:13.325 clat (usec): min=190, max=41999, avg=2084.40, stdev=8465.63 00:40:13.325 lat (usec): min=198, max=42022, avg=2116.05, stdev=8507.89 00:40:13.325 clat percentiles (usec): 00:40:13.325 | 1.00th=[ 200], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 231], 00:40:13.325 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 247], 00:40:13.325 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 383], 00:40:13.325 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:40:13.325 | 99.99th=[42206] 00:40:13.325 bw ( KiB/s): min= 96, max= 9956, per=28.99%, avg=1752.50, stdev=4018.90, samples=6 00:40:13.325 iops : min= 24, max= 2489, avg=438.00, stdev=1004.79, samples=6 00:40:13.325 lat (usec) : 250=72.29%, 500=23.12% 
00:40:13.325 lat (msec) : 50=4.51% 00:40:13.325 cpu : usr=0.23%, sys=0.85%, ctx=1441, majf=0, minf=1 00:40:13.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:13.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:13.325 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:13.325 issued rwts: total=1440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:13.325 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:13.325 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1541045: Sat Dec 7 10:14:41 2024 00:40:13.325 read: IOPS=1028, BW=4112KiB/s (4211kB/s)(13.2MiB/3287msec) 00:40:13.325 slat (usec): min=3, max=14486, avg=16.89, stdev=330.20 00:40:13.325 clat (usec): min=209, max=48296, avg=948.09, stdev=5311.21 00:40:13.325 lat (usec): min=217, max=48307, avg=964.99, stdev=5322.50 00:40:13.325 clat percentiles (usec): 00:40:13.325 | 1.00th=[ 217], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 233], 00:40:13.325 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:40:13.325 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 258], 95.00th=[ 265], 00:40:13.325 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:40:13.325 | 99.99th=[48497] 00:40:13.325 bw ( KiB/s): min= 96, max=12191, per=55.07%, avg=3328.67, stdev=5230.22, samples=6 00:40:13.325 iops : min= 24, max= 3047, avg=832.00, stdev=1307.26, samples=6 00:40:13.325 lat (usec) : 250=59.23%, 500=38.85%, 750=0.18% 00:40:13.325 lat (msec) : 50=1.72% 00:40:13.325 cpu : usr=0.27%, sys=0.91%, ctx=3383, majf=0, minf=2 00:40:13.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:13.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:13.325 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:13.325 issued rwts: total=3380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:40:13.325 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:13.325 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1541046: Sat Dec 7 10:14:41 2024 00:40:13.325 read: IOPS=25, BW=100KiB/s (103kB/s)(288KiB/2868msec) 00:40:13.325 slat (usec): min=8, max=12775, avg=197.57, stdev=1492.60 00:40:13.325 clat (usec): min=432, max=42007, avg=39315.27, stdev=8151.44 00:40:13.325 lat (usec): min=455, max=53949, avg=39515.21, stdev=8327.77 00:40:13.325 clat percentiles (usec): 00:40:13.325 | 1.00th=[ 433], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:13.325 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:13.325 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:13.325 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:13.325 | 99.99th=[42206] 00:40:13.325 bw ( KiB/s): min= 96, max= 112, per=1.69%, avg=102.20, stdev= 6.65, samples=5 00:40:13.325 iops : min= 24, max= 28, avg=25.40, stdev= 1.67, samples=5 00:40:13.325 lat (usec) : 500=2.74%, 750=1.37% 00:40:13.325 lat (msec) : 50=94.52% 00:40:13.325 cpu : usr=0.10%, sys=0.00%, ctx=74, majf=0, minf=2 00:40:13.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:13.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:13.325 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:13.325 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:13.325 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:13.325 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1541047: Sat Dec 7 10:14:41 2024 00:40:13.325 read: IOPS=28, BW=113KiB/s (116kB/s)(304KiB/2692msec) 00:40:13.325 slat (nsec): min=8787, max=38948, avg=23540.06, stdev=7030.09 00:40:13.325 clat (usec): min=276, max=41385, avg=35055.43, stdev=14380.58 00:40:13.325 
lat (usec): min=313, max=41408, avg=35078.98, stdev=14376.76 00:40:13.325 clat percentiles (usec): 00:40:13.325 | 1.00th=[ 277], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[40633], 00:40:13.325 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:13.325 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:13.325 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:13.325 | 99.99th=[41157] 00:40:13.325 bw ( KiB/s): min= 96, max= 144, per=1.90%, avg=115.00, stdev=18.57, samples=5 00:40:13.325 iops : min= 24, max= 36, avg=28.60, stdev= 4.77, samples=5 00:40:13.325 lat (usec) : 500=12.99%, 750=1.30% 00:40:13.325 lat (msec) : 50=84.42% 00:40:13.325 cpu : usr=0.11%, sys=0.00%, ctx=77, majf=0, minf=2 00:40:13.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:13.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:13.325 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:13.325 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:13.325 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:13.325 00:40:13.325 Run status group 0 (all jobs): 00:40:13.325 READ: bw=6043KiB/s (6188kB/s), 100KiB/s-4112KiB/s (103kB/s-4211kB/s), io=19.4MiB (20.3MB), run=2692-3287msec 00:40:13.325 00:40:13.325 Disk stats (read/write): 00:40:13.325 nvme0n1: ios=1439/0, merge=0/0, ticks=2988/0, in_queue=2988, util=93.22% 00:40:13.325 nvme0n2: ios=2592/0, merge=0/0, ticks=2977/0, in_queue=2977, util=94.63% 00:40:13.325 nvme0n3: ios=71/0, merge=0/0, ticks=2792/0, in_queue=2792, util=95.78% 00:40:13.325 nvme0n4: ios=73/0, merge=0/0, ticks=2543/0, in_queue=2543, util=96.42% 00:40:13.583 10:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:13.583 10:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:40:13.583 10:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:13.583 10:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:40:13.840 10:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:13.840 10:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:40:14.098 10:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:14.098 10:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:40:14.356 10:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:40:14.356 10:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1540803 00:40:14.356 10:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:40:14.356 10:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:14.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:14.356 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:14.356 10:14:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:40:14.356 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:40:14.356 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:14.356 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:40:14.356 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:14.356 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:40:14.356 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:40:14.356 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:40:14.356 nvmf hotplug test: fio failed as expected 00:40:14.356 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@91 -- # nvmftestfini 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:14.614 rmmod nvme_tcp 00:40:14.614 rmmod nvme_fabrics 00:40:14.614 rmmod nvme_keyring 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 1538217 ']' 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 1538217 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1538217 ']' 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1538217 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:40:14.614 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1538217 00:40:14.872 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:14.872 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:14.872 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1538217' 00:40:14.872 killing process with pid 1538217 00:40:14.872 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1538217 00:40:14.872 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1538217 00:40:14.872 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:14.872 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:14.872 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:14.872 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:40:14.872 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:40:14.872 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:14.872 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:40:14.872 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:14.872 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:40:14.872 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:14.872 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:14.872 10:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:17.400 00:40:17.400 real 0m25.416s 00:40:17.400 user 1m31.099s 00:40:17.400 sys 0m10.590s 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:17.400 ************************************ 00:40:17.400 END TEST nvmf_fio_target 00:40:17.400 ************************************ 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:17.400 ************************************ 00:40:17.400 START TEST nvmf_bdevio 00:40:17.400 ************************************ 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh 
--transport=tcp --interrupt-mode 00:40:17.400 * Looking for test storage... 00:40:17.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:17.400 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- scripts/common.sh@368 -- # return 0 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:17.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.401 --rc genhtml_branch_coverage=1 00:40:17.401 --rc genhtml_function_coverage=1 00:40:17.401 --rc genhtml_legend=1 00:40:17.401 --rc geninfo_all_blocks=1 00:40:17.401 --rc geninfo_unexecuted_blocks=1 00:40:17.401 00:40:17.401 ' 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:17.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.401 --rc genhtml_branch_coverage=1 00:40:17.401 --rc genhtml_function_coverage=1 00:40:17.401 --rc genhtml_legend=1 00:40:17.401 --rc geninfo_all_blocks=1 00:40:17.401 --rc geninfo_unexecuted_blocks=1 00:40:17.401 00:40:17.401 ' 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:17.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.401 --rc genhtml_branch_coverage=1 00:40:17.401 --rc genhtml_function_coverage=1 00:40:17.401 --rc genhtml_legend=1 00:40:17.401 --rc geninfo_all_blocks=1 00:40:17.401 --rc geninfo_unexecuted_blocks=1 00:40:17.401 00:40:17.401 ' 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:17.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.401 --rc genhtml_branch_coverage=1 00:40:17.401 --rc genhtml_function_coverage=1 00:40:17.401 --rc genhtml_legend=1 00:40:17.401 --rc geninfo_all_blocks=1 00:40:17.401 --rc geninfo_unexecuted_blocks=1 00:40:17.401 00:40:17.401 ' 00:40:17.401 10:14:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:17.401 10:14:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:40:17.401 10:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:22.660 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:22.661 10:14:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:40:22.661 Found 0000:86:00.0 (0x8086 - 0x159b) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:40:22.661 Found 0000:86:00.1 (0x8086 - 0x159b) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:22.661 10:14:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:40:22.661 Found net devices under 0000:86:00.0: cvl_0_0 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 
00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:40:22.661 Found net devices under 0000:86:00.1: cvl_0_1 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:22.661 
10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:22.661 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:22.920 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:22.920 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:22.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:22.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:40:22.921 00:40:22.921 --- 10.0.0.2 ping statistics --- 00:40:22.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:22.921 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:22.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:22.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:40:22.921 00:40:22.921 --- 10.0.0.1 ping statistics --- 00:40:22.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:22.921 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@505 -- # nvmfpid=1545275 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 1545275 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1545275 ']' 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:22.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:22.921 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:22.921 [2024-12-07 10:14:51.529794] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:22.921 [2024-12-07 10:14:51.530751] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:40:22.921 [2024-12-07 10:14:51.530786] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:22.921 [2024-12-07 10:14:51.589341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:22.921 [2024-12-07 10:14:51.631679] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:22.921 [2024-12-07 10:14:51.631722] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:22.921 [2024-12-07 10:14:51.631730] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:22.921 [2024-12-07 10:14:51.631736] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:22.921 [2024-12-07 10:14:51.631741] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:22.921 [2024-12-07 10:14:51.631882] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:40:22.921 [2024-12-07 10:14:51.631991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:40:22.921 [2024-12-07 10:14:51.632098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:40:22.921 [2024-12-07 10:14:51.632100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:40:23.179 [2024-12-07 10:14:51.710771] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:23.179 [2024-12-07 10:14:51.711424] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:23.179 [2024-12-07 10:14:51.711957] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:40:23.179 [2024-12-07 10:14:51.712200] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:23.179 [2024-12-07 10:14:51.712219] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:23.179 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:23.179 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:40:23.179 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:23.179 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:23.179 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:23.179 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:23.179 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:23.179 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:23.179 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:23.180 [2024-12-07 10:14:51.772613] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:23.180 Malloc0 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:23.180 [2024-12-07 10:14:51.836841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:23.180 { 00:40:23.180 "params": { 00:40:23.180 "name": "Nvme$subsystem", 00:40:23.180 "trtype": "$TEST_TRANSPORT", 00:40:23.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:23.180 "adrfam": "ipv4", 00:40:23.180 "trsvcid": "$NVMF_PORT", 00:40:23.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:23.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:23.180 "hdgst": ${hdgst:-false}, 00:40:23.180 "ddgst": ${ddgst:-false} 00:40:23.180 }, 00:40:23.180 "method": "bdev_nvme_attach_controller" 00:40:23.180 } 00:40:23.180 EOF 00:40:23.180 )") 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:40:23.180 10:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:23.180 "params": { 00:40:23.180 "name": "Nvme1", 00:40:23.180 "trtype": "tcp", 00:40:23.180 "traddr": "10.0.0.2", 00:40:23.180 "adrfam": "ipv4", 00:40:23.180 "trsvcid": "4420", 00:40:23.180 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:23.180 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:23.180 "hdgst": false, 00:40:23.180 "ddgst": false 00:40:23.180 }, 00:40:23.180 "method": "bdev_nvme_attach_controller" 00:40:23.180 }' 00:40:23.180 [2024-12-07 10:14:51.885523] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:40:23.180 [2024-12-07 10:14:51.885571] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545308 ] 00:40:23.438 [2024-12-07 10:14:51.940648] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:23.438 [2024-12-07 10:14:51.983502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:23.438 [2024-12-07 10:14:51.983599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:40:23.438 [2024-12-07 10:14:51.983601] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:23.438 I/O targets: 00:40:23.438 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:23.438 00:40:23.438 00:40:23.438 CUnit - A unit testing framework for C - Version 2.1-3 00:40:23.438 http://cunit.sourceforge.net/ 00:40:23.438 00:40:23.438 00:40:23.438 Suite: bdevio tests on: Nvme1n1 00:40:23.695 Test: blockdev write read block ...passed 00:40:23.695 Test: blockdev write zeroes read block ...passed 00:40:23.695 Test: blockdev write zeroes read no split ...passed 00:40:23.695 Test: blockdev 
write zeroes read split ...passed 00:40:23.695 Test: blockdev write zeroes read split partial ...passed 00:40:23.695 Test: blockdev reset ...[2024-12-07 10:14:52.277703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:23.695 [2024-12-07 10:14:52.277765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fff4a0 (9): Bad file descriptor 00:40:23.695 [2024-12-07 10:14:52.281779] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:40:23.695 passed 00:40:23.695 Test: blockdev write read 8 blocks ...passed 00:40:23.695 Test: blockdev write read size > 128k ...passed 00:40:23.695 Test: blockdev write read invalid size ...passed 00:40:23.695 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:23.695 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:23.695 Test: blockdev write read max offset ...passed 00:40:23.695 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:23.954 Test: blockdev writev readv 8 blocks ...passed 00:40:23.954 Test: blockdev writev readv 30 x 1block ...passed 00:40:23.954 Test: blockdev writev readv block ...passed 00:40:23.954 Test: blockdev writev readv size > 128k ...passed 00:40:23.954 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:23.954 Test: blockdev comparev and writev ...[2024-12-07 10:14:52.494119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:23.954 [2024-12-07 10:14:52.494152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:23.954 [2024-12-07 10:14:52.494166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:23.954 [2024-12-07 10:14:52.494174] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:23.954 [2024-12-07 10:14:52.494487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:23.954 [2024-12-07 10:14:52.494497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:23.954 [2024-12-07 10:14:52.494509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:23.954 [2024-12-07 10:14:52.494516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:23.954 [2024-12-07 10:14:52.494829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:23.954 [2024-12-07 10:14:52.494840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:23.954 [2024-12-07 10:14:52.494851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:23.954 [2024-12-07 10:14:52.494859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:23.954 [2024-12-07 10:14:52.495175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:23.954 [2024-12-07 10:14:52.495186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:23.954 [2024-12-07 10:14:52.495198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x200 00:40:23.954 [2024-12-07 10:14:52.495206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:23.954 passed 00:40:23.954 Test: blockdev nvme passthru rw ...passed 00:40:23.954 Test: blockdev nvme passthru vendor specific ...[2024-12-07 10:14:52.577365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:23.954 [2024-12-07 10:14:52.577382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:23.954 [2024-12-07 10:14:52.577513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:23.954 [2024-12-07 10:14:52.577523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:23.954 [2024-12-07 10:14:52.577650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:23.954 [2024-12-07 10:14:52.577659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:23.954 [2024-12-07 10:14:52.577786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:23.954 [2024-12-07 10:14:52.577796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:23.954 passed 00:40:23.954 Test: blockdev nvme admin passthru ...passed 00:40:23.954 Test: blockdev copy ...passed 00:40:23.954 00:40:23.954 Run Summary: Type Total Ran Passed Failed Inactive 00:40:23.954 suites 1 1 n/a 0 0 00:40:23.954 tests 23 23 23 0 0 00:40:23.954 asserts 152 152 152 0 n/a 00:40:23.954 00:40:23.954 Elapsed time = 1.000 seconds 00:40:24.213 10:14:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:24.213 rmmod nvme_tcp 00:40:24.213 rmmod nvme_fabrics 00:40:24.213 rmmod nvme_keyring 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@513 -- # 
'[' -n 1545275 ']' 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 1545275 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1545275 ']' 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1545275 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1545275 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1545275' 00:40:24.213 killing process with pid 1545275 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1545275 00:40:24.213 10:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1545275 00:40:24.472 10:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:24.472 10:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:24.472 10:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:24.472 10:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 
00:40:24.472 10:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:24.472 10:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:40:24.472 10:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:40:24.472 10:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:24.472 10:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:24.472 10:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:24.472 10:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:24.472 10:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:27.008 10:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:27.008 00:40:27.008 real 0m9.500s 00:40:27.008 user 0m7.975s 00:40:27.008 sys 0m4.920s 00:40:27.008 10:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:27.008 10:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:27.008 ************************************ 00:40:27.008 END TEST nvmf_bdevio 00:40:27.008 ************************************ 00:40:27.008 10:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:27.008 00:40:27.009 real 4m24.177s 00:40:27.009 user 8m59.133s 00:40:27.009 sys 1m47.973s 00:40:27.009 10:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:40:27.009 10:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:27.009 ************************************ 00:40:27.009 END TEST nvmf_target_core_interrupt_mode 00:40:27.009 ************************************ 00:40:27.009 10:14:55 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:27.009 10:14:55 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:27.009 10:14:55 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:27.009 10:14:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:27.009 ************************************ 00:40:27.009 START TEST nvmf_interrupt 00:40:27.009 ************************************ 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:27.009 * Looking for test storage... 
00:40:27.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:27.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.009 --rc genhtml_branch_coverage=1 00:40:27.009 --rc genhtml_function_coverage=1 00:40:27.009 --rc genhtml_legend=1 00:40:27.009 --rc geninfo_all_blocks=1 00:40:27.009 --rc geninfo_unexecuted_blocks=1 00:40:27.009 00:40:27.009 ' 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:27.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.009 --rc genhtml_branch_coverage=1 00:40:27.009 --rc 
genhtml_function_coverage=1 00:40:27.009 --rc genhtml_legend=1 00:40:27.009 --rc geninfo_all_blocks=1 00:40:27.009 --rc geninfo_unexecuted_blocks=1 00:40:27.009 00:40:27.009 ' 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:27.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.009 --rc genhtml_branch_coverage=1 00:40:27.009 --rc genhtml_function_coverage=1 00:40:27.009 --rc genhtml_legend=1 00:40:27.009 --rc geninfo_all_blocks=1 00:40:27.009 --rc geninfo_unexecuted_blocks=1 00:40:27.009 00:40:27.009 ' 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:27.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.009 --rc genhtml_branch_coverage=1 00:40:27.009 --rc genhtml_function_coverage=1 00:40:27.009 --rc genhtml_legend=1 00:40:27.009 --rc geninfo_all_blocks=1 00:40:27.009 --rc geninfo_unexecuted_blocks=1 00:40:27.009 00:40:27.009 ' 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:27.009 
10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.009 
10:14:55 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:27.009 10:14:55 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:27.009 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:27.010 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:27.010 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:27.010 10:14:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:40:27.010 10:14:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:27.010 10:14:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:40:27.010 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:27.010 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:27.010 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:27.010 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:27.010 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:27.010 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:27.010 10:14:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:27.010 10:14:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:27.010 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:27.010 
10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:27.010 10:14:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:40:27.010 10:14:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:32.278 10:15:00 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:40:32.278 Found 0000:86:00.0 (0x8086 - 0x159b) 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:40:32.278 Found 0000:86:00.1 (0x8086 - 0x159b) 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:32.278 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:40:32.279 Found net devices under 0000:86:00.0: cvl_0_0 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:40:32.279 Found net devices under 0000:86:00.1: cvl_0_1 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # is_hw=yes 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:32.279 10:15:00 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:32.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:32.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:40:32.279 00:40:32.279 --- 10.0.0.2 ping statistics --- 00:40:32.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:32.279 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:32.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:32.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:40:32.279 00:40:32.279 --- 10.0.0.1 ping statistics --- 00:40:32.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:32.279 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # return 0 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # nvmfpid=1548889 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # waitforlisten 1548889 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 1548889 ']' 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:32.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:32.279 [2024-12-07 10:15:00.700704] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:32.279 [2024-12-07 10:15:00.701629] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:40:32.279 [2024-12-07 10:15:00.701664] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:32.279 [2024-12-07 10:15:00.760917] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:32.279 [2024-12-07 10:15:00.801514] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:32.279 [2024-12-07 10:15:00.801557] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:32.279 [2024-12-07 10:15:00.801565] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:32.279 [2024-12-07 10:15:00.801572] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:32.279 [2024-12-07 10:15:00.801577] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:32.279 [2024-12-07 10:15:00.801615] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:32.279 [2024-12-07 10:15:00.801619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:32.279 [2024-12-07 10:15:00.863511] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:32.279 [2024-12-07 10:15:00.863659] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:32.279 [2024-12-07 10:15:00.863743] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:40:32.279 5000+0 records in 00:40:32.279 5000+0 records out 00:40:32.279 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0180827 s, 566 MB/s 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:32.279 AIO0 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:40:32.279 10:15:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:32.279 10:15:00 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:32.538 [2024-12-07 10:15:01.002347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:32.538 10:15:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:32.538 10:15:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:32.538 10:15:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:32.538 10:15:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:32.538 10:15:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:32.538 10:15:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:40:32.538 10:15:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:32.538 10:15:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:32.538 10:15:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:32.538 10:15:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:32.538 10:15:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:32.539 [2024-12-07 10:15:01.046566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1548889 0 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1548889 0 idle 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1548889 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1548889 -w 256 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1548889 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.21 reactor_0' 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1548889 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.21 reactor_0 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:32.539 
10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1548889 1 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1548889 1 idle 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1548889 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1548889 -w 256 00:40:32.539 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:32.797 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1548893 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:40:32.797 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1548893 root 20 0 128.2g 
46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:40:32.797 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1549119 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1548889 0 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1548889 0 busy 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1548889 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1548889 -w 256 00:40:32.798 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1548889 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.36 reactor_0' 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1548889 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.36 reactor_0 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:33.056 10:15:01 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1548889 1 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1548889 1 busy 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1548889 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1548889 -w 256 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:33.056 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1548893 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.25 reactor_1' 00:40:33.312 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1548893 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.25 reactor_1 00:40:33.312 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:33.312 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:33.312 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:33.312 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:40:33.312 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:33.312 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:33.312 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:33.312 10:15:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:33.312 10:15:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1549119 00:40:43.354 Initializing NVMe Controllers 00:40:43.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:43.354 Controller IO queue size 256, less than required. 00:40:43.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:43.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:43.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:43.354 Initialization complete. Launching workers. 
00:40:43.354 ======================================================== 00:40:43.354 Latency(us) 00:40:43.354 Device Information : IOPS MiB/s Average min max 00:40:43.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 15652.83 61.14 16364.38 2754.24 20074.18 00:40:43.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 15973.23 62.40 16035.14 4196.05 20014.98 00:40:43.354 ======================================================== 00:40:43.354 Total : 31626.06 123.54 16198.09 2754.24 20074.18 00:40:43.354 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1548889 0 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1548889 0 idle 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1548889 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1548889 -w 256 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1548889 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.20 reactor_0' 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1548889 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.20 reactor_0 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1548889 1 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1548889 1 idle 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1548889 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:43.354 10:15:11 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1548889 -w 256 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1548893 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1548893 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:43.354 10:15:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:43.665 10:15:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:40:43.665 10:15:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:40:43.665 10:15:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:40:43.665 10:15:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:40:43.665 10:15:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1548889 0 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1548889 0 idle 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1548889 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1548889 -w 256 00:40:45.633 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1548889 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:20.36 reactor_0' 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1548889 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:20.36 reactor_0 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1548889 1 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1548889 1 idle 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1548889 00:40:45.892 
10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:45.892 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:45.893 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:45.893 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1548889 -w 256 00:40:45.893 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1548893 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:10.05 reactor_1' 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1548893 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:10.05 reactor_1 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:46.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:46.152 10:15:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:46.152 rmmod nvme_tcp 00:40:46.152 rmmod nvme_fabrics 00:40:46.411 rmmod nvme_keyring 00:40:46.411 10:15:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:46.411 10:15:14 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:46.411 10:15:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:46.411 10:15:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@513 -- # '[' -n 1548889 ']' 00:40:46.411 10:15:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # killprocess 1548889 00:40:46.411 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 1548889 ']' 00:40:46.411 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 1548889 00:40:46.411 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:40:46.411 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:46.411 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1548889 00:40:46.411 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:46.411 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:46.411 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1548889' 00:40:46.411 killing process with pid 1548889 00:40:46.411 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 1548889 00:40:46.411 10:15:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 1548889 00:40:46.670 10:15:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:46.670 10:15:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:46.670 10:15:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:46.670 10:15:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:46.670 10:15:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-save 00:40:46.670 10:15:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:46.670 10:15:15 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@787 -- # iptables-restore 00:40:46.670 10:15:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:46.670 10:15:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:46.670 10:15:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:46.670 10:15:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:46.670 10:15:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:48.573 10:15:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:48.573 00:40:48.573 real 0m21.967s 00:40:48.573 user 0m39.039s 00:40:48.573 sys 0m8.083s 00:40:48.573 10:15:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:48.573 10:15:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:48.573 ************************************ 00:40:48.573 END TEST nvmf_interrupt 00:40:48.573 ************************************ 00:40:48.832 00:40:48.832 real 34m34.633s 00:40:48.832 user 85m28.694s 00:40:48.832 sys 9m56.625s 00:40:48.832 10:15:17 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:48.832 10:15:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:48.832 ************************************ 00:40:48.832 END TEST nvmf_tcp 00:40:48.832 ************************************ 00:40:48.832 10:15:17 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:40:48.832 10:15:17 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:48.832 10:15:17 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:48.832 10:15:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:48.832 10:15:17 -- common/autotest_common.sh@10 -- # set +x 00:40:48.832 ************************************ 
00:40:48.832 START TEST spdkcli_nvmf_tcp 00:40:48.832 ************************************ 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:48.832 * Looking for test storage... 00:40:48.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:48.832 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:49.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:49.091 --rc genhtml_branch_coverage=1 00:40:49.091 --rc genhtml_function_coverage=1 00:40:49.091 --rc genhtml_legend=1 00:40:49.091 --rc geninfo_all_blocks=1 00:40:49.091 --rc geninfo_unexecuted_blocks=1 00:40:49.091 00:40:49.091 ' 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:49.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:49.091 --rc genhtml_branch_coverage=1 00:40:49.091 --rc genhtml_function_coverage=1 00:40:49.091 --rc genhtml_legend=1 00:40:49.091 --rc geninfo_all_blocks=1 
00:40:49.091 --rc geninfo_unexecuted_blocks=1 00:40:49.091 00:40:49.091 ' 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:49.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:49.091 --rc genhtml_branch_coverage=1 00:40:49.091 --rc genhtml_function_coverage=1 00:40:49.091 --rc genhtml_legend=1 00:40:49.091 --rc geninfo_all_blocks=1 00:40:49.091 --rc geninfo_unexecuted_blocks=1 00:40:49.091 00:40:49.091 ' 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:49.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:49.091 --rc genhtml_branch_coverage=1 00:40:49.091 --rc genhtml_function_coverage=1 00:40:49.091 --rc genhtml_legend=1 00:40:49.091 --rc geninfo_all_blocks=1 00:40:49.091 --rc geninfo_unexecuted_blocks=1 00:40:49.091 00:40:49.091 ' 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:49.091 10:15:17 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:49.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1552266 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1552266 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1552266 ']' 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:49.092 
10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:49.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:49.092 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:49.092 [2024-12-07 10:15:17.641128] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:40:49.092 [2024-12-07 10:15:17.641182] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1552266 ] 00:40:49.092 [2024-12-07 10:15:17.697533] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:49.092 [2024-12-07 10:15:17.739039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:49.092 [2024-12-07 10:15:17.739042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:49.350 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:49.350 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:40:49.350 10:15:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:49.350 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:49.350 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:49.350 10:15:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:49.350 10:15:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:49.350 10:15:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:40:49.350 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:49.350 10:15:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:49.350 10:15:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:49.350 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:49.350 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:49.350 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:49.350 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:49.350 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:49.350 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:49.350 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:49.350 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:49.350 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:49.350 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:49.350 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:49.350 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:49.350 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:49.350 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:40:49.350 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:49.350 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:49.350 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:49.350 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:49.350 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:49.350 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:49.350 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:49.350 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:49.350 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:49.350 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:49.350 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:49.350 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:49.350 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:49.350 ' 00:40:51.876 [2024-12-07 10:15:20.347598] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:53.256 [2024-12-07 10:15:21.571798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:55.160 [2024-12-07 10:15:23.818709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:40:57.063 [2024-12-07 10:15:25.752723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:58.958 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:58.958 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:58.958 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:58.958 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:58.958 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:58.958 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:58.958 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:58.958 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:58.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:58.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:58.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:58.958 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:58.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:58.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:58.958 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:40:58.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:58.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:58.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:58.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:58.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:58.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:58.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:58.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:58.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:58.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:58.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:58.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:58.958 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:58.958 10:15:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:58.958 10:15:27 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@730 -- # xtrace_disable 00:40:58.958 10:15:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:58.958 10:15:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:58.958 10:15:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:58.958 10:15:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:58.958 10:15:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:58.958 10:15:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:59.216 10:15:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:59.216 10:15:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:59.216 10:15:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:59.216 10:15:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:59.216 10:15:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:59.216 10:15:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:59.216 10:15:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:59.216 10:15:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:59.216 10:15:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:59.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:59.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:59.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:59.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:59.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:59.216 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:59.216 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:59.216 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:59.216 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:59.216 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:59.216 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:59.216 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:59.216 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:59.216 ' 00:41:04.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:41:04.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:41:04.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:04.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:41:04.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:41:04.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:41:04.479 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:41:04.479 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:04.479 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:41:04.479 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:41:04.479 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:41:04.479 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:41:04.479 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:41:04.479 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:41:04.479 10:15:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:41:04.479 10:15:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:04.479 10:15:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:04.479 10:15:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1552266 00:41:04.479 10:15:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1552266 ']' 00:41:04.479 10:15:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1552266 00:41:04.479 10:15:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:41:04.479 10:15:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:04.479 10:15:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1552266 00:41:04.479 10:15:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:04.479 10:15:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:04.479 10:15:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1552266' 00:41:04.479 killing process with pid 1552266 00:41:04.479 10:15:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1552266 00:41:04.479 10:15:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1552266 00:41:04.739 10:15:33 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:41:04.739 10:15:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:41:04.739 10:15:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1552266 ']' 00:41:04.739 10:15:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1552266 00:41:04.739 10:15:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1552266 ']' 00:41:04.739 10:15:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1552266 00:41:04.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1552266) - No such process 00:41:04.739 10:15:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1552266 is not found' 00:41:04.739 Process with pid 1552266 is not found 00:41:04.739 10:15:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:41:04.739 10:15:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:41:04.739 10:15:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:41:04.739 00:41:04.739 real 0m15.879s 00:41:04.739 user 0m33.031s 00:41:04.739 sys 0m0.723s 00:41:04.739 10:15:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:04.739 10:15:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:04.739 ************************************ 00:41:04.739 END TEST spdkcli_nvmf_tcp 00:41:04.739 ************************************ 00:41:04.739 10:15:33 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:04.739 10:15:33 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:04.739 10:15:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:41:04.739 10:15:33 -- common/autotest_common.sh@10 -- # set +x 00:41:04.739 ************************************ 00:41:04.739 START TEST nvmf_identify_passthru 00:41:04.739 ************************************ 00:41:04.739 10:15:33 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:04.739 * Looking for test storage... 00:41:04.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:04.739 10:15:33 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:04.739 10:15:33 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:41:04.739 10:15:33 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:04.999 10:15:33 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:41:04.999 10:15:33 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:04.999 10:15:33 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:04.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.999 --rc genhtml_branch_coverage=1 00:41:04.999 --rc genhtml_function_coverage=1 00:41:04.999 --rc genhtml_legend=1 00:41:04.999 --rc geninfo_all_blocks=1 00:41:04.999 --rc geninfo_unexecuted_blocks=1 00:41:04.999 
00:41:04.999 ' 00:41:04.999 10:15:33 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:04.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.999 --rc genhtml_branch_coverage=1 00:41:04.999 --rc genhtml_function_coverage=1 00:41:04.999 --rc genhtml_legend=1 00:41:04.999 --rc geninfo_all_blocks=1 00:41:04.999 --rc geninfo_unexecuted_blocks=1 00:41:04.999 00:41:04.999 ' 00:41:04.999 10:15:33 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:04.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.999 --rc genhtml_branch_coverage=1 00:41:04.999 --rc genhtml_function_coverage=1 00:41:04.999 --rc genhtml_legend=1 00:41:04.999 --rc geninfo_all_blocks=1 00:41:04.999 --rc geninfo_unexecuted_blocks=1 00:41:04.999 00:41:04.999 ' 00:41:04.999 10:15:33 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:04.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.999 --rc genhtml_branch_coverage=1 00:41:04.999 --rc genhtml_function_coverage=1 00:41:04.999 --rc genhtml_legend=1 00:41:04.999 --rc geninfo_all_blocks=1 00:41:04.999 --rc geninfo_unexecuted_blocks=1 00:41:04.999 00:41:04.999 ' 00:41:04.999 10:15:33 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:04.999 10:15:33 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:04.999 10:15:33 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.999 10:15:33 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.999 10:15:33 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.999 10:15:33 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:04.999 10:15:33 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:41:04.999 10:15:33 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:04.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:04.999 10:15:33 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:04.999 10:15:33 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:04.999 10:15:33 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.999 10:15:33 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.999 10:15:33 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.999 10:15:33 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:04.999 10:15:33 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.999 10:15:33 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:04.999 10:15:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:04.999 10:15:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:04.999 10:15:33 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:41:04.999 10:15:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:10.272 
10:15:38 nvmf_identify_passthru -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:41:10.272 Found 0000:86:00.0 (0x8086 - 0x159b) 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:10.272 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:41:10.273 Found 0000:86:00.1 (0x8086 - 0x159b) 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 
00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:41:10.273 Found net devices under 0000:86:00.0: cvl_0_0 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:10.273 10:15:38 nvmf_identify_passthru -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:41:10.273 Found net devices under 0000:86:00.1: cvl_0_1 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@438 -- # is_hw=yes 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:10.273 10:15:38 
nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:10.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:10.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:41:10.273 00:41:10.273 --- 10.0.0.2 ping statistics --- 00:41:10.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:10.273 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:10.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:10.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:41:10.273 00:41:10.273 --- 10.0.0.1 ping statistics --- 00:41:10.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:10.273 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@446 -- # return 0 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:10.273 10:15:38 nvmf_identify_passthru -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:10.273 10:15:38 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:41:10.273 10:15:38 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:10.273 10:15:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:10.273 10:15:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:41:10.273 
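The `nvmf_tcp_init` steps the log records above — moving one NIC port into a private network namespace, addressing both ends, opening the NVMe/TCP port in iptables, and verifying reachability with ping — condense to roughly the following. This is a sketch of the traced commands, not a substitute for `nvmf/common.sh`; it requires root and the `cvl_0_0`/`cvl_0_1` interfaces present on this test rig.

```shell
# Target port lives in its own netns; the initiator stays in the default ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Accept NVMe/TCP traffic on the default discovery port.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
```

Keeping target and initiator in separate namespaces lets a single host exercise a real TCP path between the two roles.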
10:15:38 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:41:10.273 10:15:38 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:41:10.273 10:15:38 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:41:10.273 10:15:38 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:41:10.273 10:15:38 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:41:10.273 10:15:38 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:41:10.273 10:15:38 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:41:10.273 10:15:38 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:41:10.273 10:15:38 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:41:10.273 10:15:38 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:41:10.273 10:15:38 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:41:10.273 10:15:38 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:5e:00.0 00:41:10.273 10:15:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:41:10.273 10:15:38 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:41:10.273 10:15:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:41:10.273 10:15:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:41:10.273 10:15:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:41:14.464 10:15:43 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:41:14.464 10:15:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:41:14.464 10:15:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:41:14.464 10:15:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:41:18.651 10:15:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:41:18.651 10:15:47 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:41:18.651 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:18.651 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:18.651 10:15:47 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:41:18.651 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:18.651 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:18.651 10:15:47 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1559114 00:41:18.651 10:15:47 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:18.651 10:15:47 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1559114 00:41:18.651 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1559114 ']' 00:41:18.651 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:18.652 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:18.652 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:18.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:18.652 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:18.652 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:18.652 10:15:47 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:41:18.652 [2024-12-07 10:15:47.225530] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:41:18.652 [2024-12-07 10:15:47.225578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:18.652 [2024-12-07 10:15:47.284372] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:18.652 [2024-12-07 10:15:47.327483] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:18.652 [2024-12-07 10:15:47.327522] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:18.652 [2024-12-07 10:15:47.327530] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:18.652 [2024-12-07 10:15:47.327536] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:18.652 [2024-12-07 10:15:47.327541] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:18.652 [2024-12-07 10:15:47.327579] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:18.652 [2024-12-07 10:15:47.327677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:41:18.652 [2024-12-07 10:15:47.327752] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:41:18.652 [2024-12-07 10:15:47.327753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:18.911 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:18.911 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:41:18.911 10:15:47 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:41:18.911 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.911 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:18.911 INFO: Log level set to 20 00:41:18.911 INFO: Requests: 00:41:18.911 { 00:41:18.911 "jsonrpc": "2.0", 00:41:18.911 "method": "nvmf_set_config", 00:41:18.911 "id": 1, 00:41:18.911 "params": { 00:41:18.911 "admin_cmd_passthru": { 00:41:18.911 "identify_ctrlr": true 00:41:18.911 } 00:41:18.911 } 00:41:18.911 } 00:41:18.911 00:41:18.911 INFO: response: 00:41:18.911 { 00:41:18.911 "jsonrpc": "2.0", 00:41:18.911 "id": 1, 00:41:18.911 "result": true 00:41:18.911 } 00:41:18.911 00:41:18.911 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.911 10:15:47 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:41:18.911 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.911 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:18.911 INFO: Setting log level to 20 00:41:18.911 INFO: Setting log level to 20 00:41:18.911 INFO: Log level set to 20 00:41:18.911 INFO: Log level set to 20 00:41:18.911 
INFO: Requests: 00:41:18.911 { 00:41:18.911 "jsonrpc": "2.0", 00:41:18.911 "method": "framework_start_init", 00:41:18.911 "id": 1 00:41:18.911 } 00:41:18.911 00:41:18.911 INFO: Requests: 00:41:18.911 { 00:41:18.911 "jsonrpc": "2.0", 00:41:18.911 "method": "framework_start_init", 00:41:18.911 "id": 1 00:41:18.911 } 00:41:18.911 00:41:18.911 [2024-12-07 10:15:47.461527] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:41:18.911 INFO: response: 00:41:18.911 { 00:41:18.911 "jsonrpc": "2.0", 00:41:18.911 "id": 1, 00:41:18.911 "result": true 00:41:18.911 } 00:41:18.911 00:41:18.911 INFO: response: 00:41:18.911 { 00:41:18.911 "jsonrpc": "2.0", 00:41:18.911 "id": 1, 00:41:18.911 "result": true 00:41:18.911 } 00:41:18.911 00:41:18.911 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.911 10:15:47 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:18.911 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.911 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:18.911 INFO: Setting log level to 40 00:41:18.911 INFO: Setting log level to 40 00:41:18.911 INFO: Setting log level to 40 00:41:18.911 [2024-12-07 10:15:47.475030] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:18.911 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.911 10:15:47 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:41:18.911 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:18.911 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:18.911 10:15:47 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:41:18.911 10:15:47 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.911 10:15:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:22.198 Nvme0n1 00:41:22.198 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.198 10:15:50 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:41:22.198 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.198 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:22.198 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.198 10:15:50 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:41:22.198 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.198 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:22.198 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.198 10:15:50 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:22.198 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.198 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:22.198 [2024-12-07 10:15:50.373723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:22.199 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.199 10:15:50 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:41:22.199 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.199 10:15:50 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:22.199 [ 00:41:22.199 { 00:41:22.199 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:22.199 "subtype": "Discovery", 00:41:22.199 "listen_addresses": [], 00:41:22.199 "allow_any_host": true, 00:41:22.199 "hosts": [] 00:41:22.199 }, 00:41:22.199 { 00:41:22.199 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:41:22.199 "subtype": "NVMe", 00:41:22.199 "listen_addresses": [ 00:41:22.199 { 00:41:22.199 "trtype": "TCP", 00:41:22.199 "adrfam": "IPv4", 00:41:22.199 "traddr": "10.0.0.2", 00:41:22.199 "trsvcid": "4420" 00:41:22.199 } 00:41:22.199 ], 00:41:22.199 "allow_any_host": true, 00:41:22.199 "hosts": [], 00:41:22.199 "serial_number": "SPDK00000000000001", 00:41:22.199 "model_number": "SPDK bdev Controller", 00:41:22.199 "max_namespaces": 1, 00:41:22.199 "min_cntlid": 1, 00:41:22.199 "max_cntlid": 65519, 00:41:22.199 "namespaces": [ 00:41:22.199 { 00:41:22.199 "nsid": 1, 00:41:22.199 "bdev_name": "Nvme0n1", 00:41:22.199 "name": "Nvme0n1", 00:41:22.199 "nguid": "9B53261AE3984AFFA5AB005AB366148F", 00:41:22.199 "uuid": "9b53261a-e398-4aff-a5ab-005ab366148f" 00:41:22.199 } 00:41:22.199 ] 00:41:22.199 } 00:41:22.199 ] 00:41:22.199 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.199 10:15:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:22.199 10:15:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:41:22.199 10:15:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:41:22.199 10:15:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:41:22.199 10:15:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:22.199 10:15:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:41:22.199 10:15:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:41:22.199 10:15:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:41:22.199 10:15:50 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:41:22.199 10:15:50 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:41:22.199 10:15:50 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:22.199 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.199 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:22.199 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.199 10:15:50 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:41:22.199 10:15:50 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:41:22.199 10:15:50 nvmf_identify_passthru -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:22.199 10:15:50 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:41:22.199 10:15:50 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:22.199 10:15:50 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:41:22.199 10:15:50 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:22.199 10:15:50 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:22.199 rmmod nvme_tcp 00:41:22.199 rmmod nvme_fabrics 00:41:22.199 rmmod nvme_keyring 00:41:22.199 10:15:50 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:22.199 10:15:50 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:41:22.199 10:15:50 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:41:22.199 10:15:50 nvmf_identify_passthru -- nvmf/common.sh@513 -- # '[' -n 1559114 ']' 00:41:22.199 10:15:50 nvmf_identify_passthru -- nvmf/common.sh@514 -- # killprocess 1559114 00:41:22.199 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1559114 ']' 00:41:22.199 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1559114 00:41:22.199 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:41:22.199 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:22.199 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1559114 00:41:22.199 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:22.199 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:22.199 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1559114' 00:41:22.199 killing process with pid 1559114 00:41:22.199 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1559114 00:41:22.199 10:15:50 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1559114 00:41:24.102 10:15:52 nvmf_identify_passthru -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:24.102 10:15:52 nvmf_identify_passthru -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:24.102 10:15:52 nvmf_identify_passthru -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:24.102 10:15:52 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:41:24.102 10:15:52 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-save 00:41:24.102 10:15:52 nvmf_identify_passthru -- 
nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:24.102 10:15:52 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-restore 00:41:24.102 10:15:52 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:24.102 10:15:52 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:24.102 10:15:52 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:24.102 10:15:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:24.102 10:15:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:26.007 10:15:54 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:26.007 00:41:26.007 real 0m21.113s 00:41:26.007 user 0m27.341s 00:41:26.007 sys 0m4.810s 00:41:26.007 10:15:54 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:26.007 10:15:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:26.007 ************************************ 00:41:26.007 END TEST nvmf_identify_passthru 00:41:26.007 ************************************ 00:41:26.007 10:15:54 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:26.008 10:15:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:26.008 10:15:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:26.008 10:15:54 -- common/autotest_common.sh@10 -- # set +x 00:41:26.008 ************************************ 00:41:26.008 START TEST nvmf_dif 00:41:26.008 ************************************ 00:41:26.008 10:15:54 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:26.008 * Looking for test storage... 
00:41:26.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:26.008 10:15:54 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:26.008 10:15:54 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:41:26.008 10:15:54 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:26.008 10:15:54 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:41:26.008 10:15:54 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:26.008 10:15:54 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:26.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:26.008 --rc genhtml_branch_coverage=1 00:41:26.008 --rc genhtml_function_coverage=1 00:41:26.008 --rc genhtml_legend=1 00:41:26.008 --rc geninfo_all_blocks=1 00:41:26.008 --rc geninfo_unexecuted_blocks=1 00:41:26.008 00:41:26.008 ' 00:41:26.008 10:15:54 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:26.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:26.008 --rc genhtml_branch_coverage=1 00:41:26.008 --rc genhtml_function_coverage=1 00:41:26.008 --rc genhtml_legend=1 00:41:26.008 --rc geninfo_all_blocks=1 00:41:26.008 --rc geninfo_unexecuted_blocks=1 00:41:26.008 00:41:26.008 ' 00:41:26.008 10:15:54 nvmf_dif -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:41:26.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:26.008 --rc genhtml_branch_coverage=1 00:41:26.008 --rc genhtml_function_coverage=1 00:41:26.008 --rc genhtml_legend=1 00:41:26.008 --rc geninfo_all_blocks=1 00:41:26.008 --rc geninfo_unexecuted_blocks=1 00:41:26.008 00:41:26.008 ' 00:41:26.008 10:15:54 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:26.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:26.008 --rc genhtml_branch_coverage=1 00:41:26.008 --rc genhtml_function_coverage=1 00:41:26.008 --rc genhtml_legend=1 00:41:26.008 --rc geninfo_all_blocks=1 00:41:26.008 --rc geninfo_unexecuted_blocks=1 00:41:26.008 00:41:26.008 ' 00:41:26.008 10:15:54 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:41:26.008 10:15:54 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:26.008 10:15:54 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:26.008 10:15:54 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:26.008 10:15:54 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:26.008 10:15:54 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:26.008 10:15:54 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:41:26.008 10:15:54 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:26.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:26.008 10:15:54 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:26.008 10:15:54 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:41:26.008 10:15:54 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:26.008 10:15:54 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:26.008 10:15:54 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:26.008 10:15:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:26.008 10:15:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:26.008 10:15:54 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:41:26.008 10:15:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:41:31.281 10:15:59 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:31.281 
10:15:59 nvmf_dif -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:41:31.281 Found 0000:86:00.0 (0x8086 - 0x159b) 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:31.281 10:15:59 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:31.281 10:16:00 nvmf_dif -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:41:31.281 Found 0000:86:00.1 (0x8086 - 0x159b) 00:41:31.281 10:16:00 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:31.281 10:16:00 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:31.281 10:16:00 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:31.281 10:16:00 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:31.281 10:16:00 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:31.281 10:16:00 nvmf_dif -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:31.281 10:16:00 nvmf_dif -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:31.281 10:16:00 nvmf_dif -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:31.281 10:16:00 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:31.281 10:16:00 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:31.281 10:16:00 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:31.281 10:16:00 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:41:31.540 Found net devices under 0000:86:00.0: cvl_0_0 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:41:31.540 Found net devices under 0000:86:00.1: cvl_0_1 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@438 -- # is_hw=yes 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:31.540 10:16:00 nvmf_dif -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:31.540 10:16:00 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:31.541 10:16:00 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:31.541 10:16:00 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:31.541 10:16:00 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:31.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:31.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:41:31.541 00:41:31.541 --- 10.0.0.2 ping statistics --- 00:41:31.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:31.541 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:41:31.541 10:16:00 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:31.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:31.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:41:31.541 00:41:31.541 --- 10.0.0.1 ping statistics --- 00:41:31.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:31.541 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:41:31.541 10:16:00 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:31.541 10:16:00 nvmf_dif -- nvmf/common.sh@446 -- # return 0 00:41:31.541 10:16:00 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:41:31.541 10:16:00 nvmf_dif -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:34.074 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:41:34.074 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:41:34.332 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:41:34.332 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:41:34.332 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:41:34.332 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:41:34.332 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:41:34.332 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:41:34.332 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:41:34.332 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:41:34.332 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:41:34.332 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:41:34.332 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:41:34.332 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:41:34.332 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:41:34.332 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:41:34.332 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:41:34.332 10:16:02 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:34.332 10:16:02 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:34.332 10:16:02 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:34.332 10:16:02 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:34.332 10:16:02 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:34.332 10:16:02 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:34.332 10:16:03 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:41:34.332 10:16:03 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:41:34.332 10:16:03 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:34.332 10:16:03 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:34.332 10:16:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:34.332 10:16:03 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=1564568 00:41:34.332 10:16:03 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 1564568 00:41:34.332 10:16:03 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:41:34.332 10:16:03 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1564568 ']' 00:41:34.332 10:16:03 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:34.332 10:16:03 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:34.332 10:16:03 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:34.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:34.332 10:16:03 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:34.332 10:16:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:34.591 [2024-12-07 10:16:03.070088] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:41:34.591 [2024-12-07 10:16:03.070144] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:34.591 [2024-12-07 10:16:03.130322] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:34.591 [2024-12-07 10:16:03.170837] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:34.591 [2024-12-07 10:16:03.170876] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:34.591 [2024-12-07 10:16:03.170884] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:34.591 [2024-12-07 10:16:03.170890] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:34.591 [2024-12-07 10:16:03.170895] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
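The target launch recorded above (`nvmf/common.sh@293` and `@504`) prepends the network-namespace wrapper to the app command before starting `nvmf_tgt`. A minimal sketch of that array composition, assuming bash and with the workspace path shortened (the netns name and flags are taken verbatim from the log):

```shell
# Sketch of nvmf/common.sh@293: wrap the nvmf_tgt command in "ip netns exec"
# so the target runs inside the test namespace. Path shortened for clarity.
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=(./build/bin/nvmf_tgt -i 0 -e 0xFFFF)

# Prepend the netns wrapper; the combined array is what common.sh@504 execs.
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
echo "${NVMF_APP[@]}"
```

Running the combined array (rather than a flat string) keeps each argument intact even if paths contain spaces, which is why the harness stores the command this way.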
00:41:34.591 [2024-12-07 10:16:03.170914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:34.591 10:16:03 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:34.591 10:16:03 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:41:34.591 10:16:03 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:34.591 10:16:03 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:34.591 10:16:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:34.591 10:16:03 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:34.591 10:16:03 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:41:34.591 10:16:03 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:41:34.591 10:16:03 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.591 10:16:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:34.591 [2024-12-07 10:16:03.301450] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:34.591 10:16:03 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.591 10:16:03 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:41:34.591 10:16:03 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:34.591 10:16:03 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:34.591 10:16:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:34.849 ************************************ 00:41:34.849 START TEST fio_dif_1_default 00:41:34.849 ************************************ 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:34.849 bdev_null0 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:34.849 [2024-12-07 10:16:03.365741] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:34.849 10:16:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:34.850 { 00:41:34.850 "params": { 00:41:34.850 "name": "Nvme$subsystem", 00:41:34.850 "trtype": "$TEST_TRANSPORT", 00:41:34.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:34.850 "adrfam": "ipv4", 00:41:34.850 "trsvcid": "$NVMF_PORT", 00:41:34.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:34.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:34.850 "hdgst": ${hdgst:-false}, 00:41:34.850 "ddgst": ${ddgst:-false} 00:41:34.850 }, 00:41:34.850 "method": "bdev_nvme_attach_controller" 00:41:34.850 } 00:41:34.850 EOF 00:41:34.850 )") 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
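The `ldd | grep libasan | awk '{print $3}'` sequence above is how the harness detects whether the fio plugin was built with ASan, so it can preload the matching sanitizer runtime before fio loads the plugin. A runnable sketch of that pipeline, using `/bin/sh` as a stand-in for the plugin binary (an assumption; the real path is the `spdk_bdev` plugin shown in the log):

```shell
# Sketch of the sanitizer-detection step: ldd the plugin, pull out any
# libasan path (column 3 of ldd output), and prepend it to LD_PRELOAD.
# /bin/sh stands in for the spdk_bdev fio plugin here.
plugin=/bin/sh
asan_lib=$(ldd "$plugin" 2>/dev/null | grep libasan | awk '{print $3}')

# Empty asan_lib (no sanitizer) yields the leading-space LD_PRELOAD value
# seen in the log: LD_PRELOAD=' /var/jenkins/.../spdk_bdev'
LD_PRELOAD="$asan_lib $plugin"
echo "$LD_PRELOAD"
```

This explains the otherwise odd `asan_lib=` / `[[ -n '' ]]` records in the log: on a non-ASan build the grep finds nothing and only the plugin itself ends up in `LD_PRELOAD`.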
00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 
00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:34.850 "params": { 00:41:34.850 "name": "Nvme0", 00:41:34.850 "trtype": "tcp", 00:41:34.850 "traddr": "10.0.0.2", 00:41:34.850 "adrfam": "ipv4", 00:41:34.850 "trsvcid": "4420", 00:41:34.850 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:34.850 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:34.850 "hdgst": false, 00:41:34.850 "ddgst": false 00:41:34.850 }, 00:41:34.850 "method": "bdev_nvme_attach_controller" 00:41:34.850 }' 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:34.850 10:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:35.108 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:35.108 fio-3.35 
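The JSON printed above is produced by `gen_nvmf_target_json`: one heredoc fragment per subsystem, accumulated into a bash array and comma-joined (the `IFS=,` and `jq .` records in the log). A self-contained sketch of that pattern, with the transport values hardcoded to match the log's `Nvme0` output (in the real script they come from `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT`):

```shell
# Sketch of gen_nvmf_target_json: emit one bdev_nvme_attach_controller
# config fragment per subsystem id, then join them with commas. The joined
# string is what fio receives on /dev/fd/62 via --spdk_json_conf.
gen_conf() {
  local config=() subsystem
  for subsystem in "$@"; do
    config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem",
              "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
              "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
              "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
    )")
  done
  local IFS=,
  printf '%s\n' "${config[*]}"
}
gen_conf 0
```

Passing two ids (`gen_conf 0 1`) yields the two-controller config used later by the multi-subsystems test.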
00:41:35.108 Starting 1 thread 00:41:47.313 00:41:47.313 filename0: (groupid=0, jobs=1): err= 0: pid=1564727: Sat Dec 7 10:16:14 2024 00:41:47.313 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10001msec) 00:41:47.313 slat (nsec): min=5659, max=25769, avg=6196.37, stdev=711.78 00:41:47.313 clat (usec): min=448, max=44906, avg=21037.59, stdev=20509.92 00:41:47.313 lat (usec): min=454, max=44931, avg=21043.78, stdev=20509.91 00:41:47.313 clat percentiles (usec): 00:41:47.313 | 1.00th=[ 461], 5.00th=[ 465], 10.00th=[ 469], 20.00th=[ 478], 00:41:47.313 | 30.00th=[ 482], 40.00th=[ 529], 50.00th=[41157], 60.00th=[41681], 00:41:47.313 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:41:47.313 | 99.00th=[41681], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:41:47.313 | 99.99th=[44827] 00:41:47.313 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=761.26, stdev=20.18, samples=19 00:41:47.313 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:41:47.313 lat (usec) : 500=38.68%, 750=11.21% 00:41:47.313 lat (msec) : 50=50.11% 00:41:47.313 cpu : usr=92.19%, sys=7.56%, ctx=7, majf=0, minf=9 00:41:47.313 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:47.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.313 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.313 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:47.313 00:41:47.313 Run status group 0 (all jobs): 00:41:47.313 READ: bw=760KiB/s (778kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=7600KiB (7782kB), run=10001-10001msec 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
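As a sanity check on the run summary above, the reported numbers are internally consistent: 1900 issued reads of 4KiB over the 10001 ms runtime give the stated 7600KiB total and 760KiB/s bandwidth, and roughly the stated IOPS:

```shell
# Arithmetic check of the fio summary: issued rwts total=1900, bs=4KiB,
# runtime ~10 s (10001 msec).
total_kib=$((1900 * 4))        # 7600 KiB, matching io=7600KiB
bw_kib_s=$((total_kib / 10))   # 760 KiB/s, matching BW=760KiB/s
iops=$((1900 / 10))            # 190, matching IOPS=189 up to rounding
echo "$total_kib $bw_kib_s $iops"
```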
00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:47.313 00:41:47.313 real 0m10.968s 00:41:47.313 user 0m15.546s 00:41:47.313 sys 0m1.022s 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:47.313 ************************************ 00:41:47.313 END TEST fio_dif_1_default 00:41:47.313 ************************************ 00:41:47.313 10:16:14 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:47.313 10:16:14 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:47.313 10:16:14 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:47.313 10:16:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:47.313 ************************************ 00:41:47.313 START TEST fio_dif_1_multi_subsystems 00:41:47.313 ************************************ 00:41:47.313 10:16:14 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:47.313 bdev_null0 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:47.313 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:47.314 10:16:14 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:47.314 [2024-12-07 10:16:14.396195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:47.314 bdev_null1 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:47.314 { 00:41:47.314 "params": { 00:41:47.314 "name": "Nvme$subsystem", 00:41:47.314 "trtype": "$TEST_TRANSPORT", 00:41:47.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:47.314 "adrfam": "ipv4", 00:41:47.314 "trsvcid": "$NVMF_PORT", 00:41:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:47.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:47.314 "hdgst": ${hdgst:-false}, 00:41:47.314 "ddgst": ${ddgst:-false} 00:41:47.314 }, 00:41:47.314 "method": "bdev_nvme_attach_controller" 00:41:47.314 } 00:41:47.314 EOF 00:41:47.314 )") 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 
00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:47.314 { 00:41:47.314 "params": { 00:41:47.314 "name": "Nvme$subsystem", 00:41:47.314 "trtype": "$TEST_TRANSPORT", 00:41:47.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:47.314 "adrfam": "ipv4", 00:41:47.314 "trsvcid": "$NVMF_PORT", 00:41:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:47.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:47.314 "hdgst": ${hdgst:-false}, 00:41:47.314 "ddgst": ${ddgst:-false} 00:41:47.314 }, 00:41:47.314 "method": "bdev_nvme_attach_controller" 00:41:47.314 } 00:41:47.314 EOF 00:41:47.314 )") 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 
00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:47.314 "params": { 00:41:47.314 "name": "Nvme0", 00:41:47.314 "trtype": "tcp", 00:41:47.314 "traddr": "10.0.0.2", 00:41:47.314 "adrfam": "ipv4", 00:41:47.314 "trsvcid": "4420", 00:41:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:47.314 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:47.314 "hdgst": false, 00:41:47.314 "ddgst": false 00:41:47.314 }, 00:41:47.314 "method": "bdev_nvme_attach_controller" 00:41:47.314 },{ 00:41:47.314 "params": { 00:41:47.314 "name": "Nvme1", 00:41:47.314 "trtype": "tcp", 00:41:47.314 "traddr": "10.0.0.2", 00:41:47.314 "adrfam": "ipv4", 00:41:47.314 "trsvcid": "4420", 00:41:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:47.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:47.314 "hdgst": false, 00:41:47.314 "ddgst": false 00:41:47.314 }, 00:41:47.314 "method": "bdev_nvme_attach_controller" 00:41:47.314 }' 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:47.314 10:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:47.315 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:47.315 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:47.315 fio-3.35 00:41:47.315 Starting 2 threads 00:41:57.276 00:41:57.276 filename0: (groupid=0, jobs=1): err= 0: pid=1566690: Sat Dec 7 10:16:25 2024 00:41:57.276 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10029msec) 00:41:57.276 slat (nsec): min=6063, max=53966, avg=12391.81, stdev=9827.21 00:41:57.276 clat (usec): min=40888, max=42509, avg=41745.22, stdev=402.28 00:41:57.276 lat (usec): min=40895, max=42557, avg=41757.61, stdev=402.86 00:41:57.276 clat percentiles (usec): 00:41:57.276 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:57.276 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:41:57.276 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:41:57.276 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:41:57.276 | 99.99th=[42730] 00:41:57.276 bw ( KiB/s): min= 352, max= 384, per=34.70%, avg=382.40, stdev= 7.16, samples=20 00:41:57.276 iops : min= 88, max= 96, avg=95.60, stdev= 1.79, samples=20 00:41:57.276 lat (msec) : 50=100.00% 00:41:57.276 cpu : usr=99.01%, sys=0.69%, ctx=78, majf=0, minf=139 00:41:57.276 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:57.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.276 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.276 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.276 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:57.276 filename1: (groupid=0, jobs=1): err= 0: pid=1566691: Sat Dec 7 10:16:25 2024 00:41:57.276 read: IOPS=179, BW=718KiB/s (735kB/s)(7200KiB/10029msec) 00:41:57.276 slat (nsec): min=6139, max=47306, avg=9322.86, stdev=5824.38 00:41:57.276 clat (usec): min=466, max=42491, avg=22257.64, stdev=20588.09 00:41:57.276 lat (usec): min=472, max=42538, avg=22266.96, stdev=20586.72 00:41:57.276 clat percentiles (usec): 00:41:57.276 | 1.00th=[ 619], 5.00th=[ 627], 10.00th=[ 635], 20.00th=[ 644], 00:41:57.276 | 30.00th=[ 652], 40.00th=[ 668], 50.00th=[41157], 60.00th=[41157], 00:41:57.276 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:41:57.276 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:41:57.276 | 99.99th=[42730] 00:41:57.276 bw ( KiB/s): min= 352, max= 768, per=65.22%, avg=718.40, stdev=113.01, samples=20 00:41:57.276 iops : min= 88, max= 192, avg=179.60, stdev=28.25, samples=20 00:41:57.276 lat (usec) : 500=0.61%, 750=46.94% 00:41:57.276 lat (msec) : 50=52.44% 00:41:57.276 cpu : usr=97.45%, sys=2.28%, ctx=12, majf=0, minf=71 00:41:57.276 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:57.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.276 issued rwts: total=1800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.276 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:57.276 00:41:57.276 Run status group 0 (all jobs): 00:41:57.276 READ: bw=1101KiB/s (1127kB/s), 383KiB/s-718KiB/s (392kB/s-735kB/s), io=10.8MiB (11.3MB), run=10029-10029msec 00:41:57.276 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:57.276 10:16:25 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:57.276 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:57.277 00:41:57.277 real 0m11.414s 00:41:57.277 user 0m26.694s 00:41:57.277 sys 0m0.617s 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:57.277 10:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:57.277 ************************************ 00:41:57.277 END TEST fio_dif_1_multi_subsystems 00:41:57.277 ************************************ 00:41:57.277 10:16:25 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:41:57.277 10:16:25 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:57.277 10:16:25 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:57.277 10:16:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:57.277 ************************************ 00:41:57.277 START TEST fio_dif_rand_params 00:41:57.277 ************************************ 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:57.277 bdev_null0 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:57.277 10:16:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:57.277 [2024-12-07 10:16:25.887972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:57.277 { 00:41:57.277 "params": { 00:41:57.277 "name": "Nvme$subsystem", 00:41:57.277 "trtype": "$TEST_TRANSPORT", 00:41:57.277 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:41:57.277 "adrfam": "ipv4", 00:41:57.277 "trsvcid": "$NVMF_PORT", 00:41:57.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:57.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:57.277 "hdgst": ${hdgst:-false}, 00:41:57.277 "ddgst": ${ddgst:-false} 00:41:57.277 }, 00:41:57.277 "method": "bdev_nvme_attach_controller" 00:41:57.277 } 00:41:57.277 EOF 00:41:57.277 )") 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:57.277 
10:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:57.277 "params": { 00:41:57.277 "name": "Nvme0", 00:41:57.277 "trtype": "tcp", 00:41:57.277 "traddr": "10.0.0.2", 00:41:57.277 "adrfam": "ipv4", 00:41:57.277 "trsvcid": "4420", 00:41:57.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:57.277 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:57.277 "hdgst": false, 00:41:57.277 "ddgst": false 00:41:57.277 }, 00:41:57.277 "method": "bdev_nvme_attach_controller" 00:41:57.277 }' 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:57.277 10:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:57.535 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:57.535 ... 00:41:57.535 fio-3.35 00:41:57.535 Starting 3 threads 00:42:04.106 00:42:04.106 filename0: (groupid=0, jobs=1): err= 0: pid=1568652: Sat Dec 7 10:16:31 2024 00:42:04.106 read: IOPS=286, BW=35.8MiB/s (37.6MB/s)(179MiB/5004msec) 00:42:04.106 slat (nsec): min=6358, max=25007, avg=10928.14, stdev=2403.66 00:42:04.106 clat (usec): min=3615, max=90010, avg=10446.35, stdev=9563.28 00:42:04.106 lat (usec): min=3622, max=90021, avg=10457.28, stdev=9563.16 00:42:04.106 clat percentiles (usec): 00:42:04.106 | 1.00th=[ 4047], 5.00th=[ 5538], 10.00th=[ 6194], 20.00th=[ 6718], 00:42:04.106 | 30.00th=[ 7504], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9110], 00:42:04.106 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[10945], 95.00th=[13435], 00:42:04.106 | 99.00th=[50594], 99.50th=[52167], 99.90th=[87557], 99.95th=[89654], 00:42:04.106 | 99.99th=[89654] 00:42:04.106 bw ( KiB/s): min=23296, max=45312, per=32.27%, avg=36684.80, stdev=7773.35, samples=10 00:42:04.106 iops : min= 182, max= 354, avg=286.60, stdev=60.73, samples=10 00:42:04.106 lat (msec) : 4=0.84%, 10=77.98%, 20=16.24%, 50=3.76%, 100=1.18% 00:42:04.106 cpu : usr=94.40%, sys=5.28%, ctx=11, majf=0, minf=28 00:42:04.106 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:04.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:04.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:04.106 issued rwts: total=1435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:04.106 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:04.106 filename0: (groupid=0, jobs=1): err= 0: pid=1568653: Sat Dec 7 10:16:31 2024 00:42:04.106 read: IOPS=331, BW=41.4MiB/s (43.4MB/s)(207MiB/5008msec) 00:42:04.106 slat (nsec): min=2972, max=43630, avg=10548.36, stdev=2610.83 
00:42:04.106 clat (usec): min=3214, max=87660, avg=9040.47, stdev=6676.28 00:42:04.106 lat (usec): min=3221, max=87667, avg=9051.02, stdev=6676.37 00:42:04.106 clat percentiles (usec): 00:42:04.106 | 1.00th=[ 3916], 5.00th=[ 4359], 10.00th=[ 5407], 20.00th=[ 6194], 00:42:04.106 | 30.00th=[ 6718], 40.00th=[ 7570], 50.00th=[ 8356], 60.00th=[ 8979], 00:42:04.106 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10683], 95.00th=[11338], 00:42:04.107 | 99.00th=[48497], 99.50th=[49021], 99.90th=[50594], 99.95th=[87557], 00:42:04.107 | 99.99th=[87557] 00:42:04.107 bw ( KiB/s): min=36352, max=50944, per=37.30%, avg=42393.60, stdev=3850.79, samples=10 00:42:04.107 iops : min= 284, max= 398, avg=331.20, stdev=30.08, samples=10 00:42:04.107 lat (msec) : 4=1.51%, 10=78.06%, 20=17.96%, 50=2.35%, 100=0.12% 00:42:04.107 cpu : usr=93.43%, sys=6.25%, ctx=17, majf=0, minf=51 00:42:04.107 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:04.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:04.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:04.107 issued rwts: total=1659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:04.107 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:04.107 filename0: (groupid=0, jobs=1): err= 0: pid=1568654: Sat Dec 7 10:16:31 2024 00:42:04.107 read: IOPS=274, BW=34.3MiB/s (36.0MB/s)(173MiB/5045msec) 00:42:04.107 slat (nsec): min=6369, max=32088, avg=11020.47, stdev=2621.41 00:42:04.107 clat (usec): min=3500, max=52232, avg=10875.25, stdev=9451.92 00:42:04.107 lat (usec): min=3507, max=52245, avg=10886.27, stdev=9451.87 00:42:04.107 clat percentiles (usec): 00:42:04.107 | 1.00th=[ 3916], 5.00th=[ 5342], 10.00th=[ 6128], 20.00th=[ 6915], 00:42:04.107 | 30.00th=[ 7701], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9372], 00:42:04.107 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11600], 95.00th=[45876], 00:42:04.107 | 99.00th=[50594], 99.50th=[50594], 
99.90th=[52167], 99.95th=[52167], 00:42:04.107 | 99.99th=[52167] 00:42:04.107 bw ( KiB/s): min=24064, max=40704, per=31.17%, avg=35430.40, stdev=5245.33, samples=10 00:42:04.107 iops : min= 188, max= 318, avg=276.80, stdev=40.98, samples=10 00:42:04.107 lat (msec) : 4=1.59%, 10=69.91%, 20=22.73%, 50=4.69%, 100=1.08% 00:42:04.107 cpu : usr=94.21%, sys=5.49%, ctx=7, majf=0, minf=49 00:42:04.107 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:04.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:04.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:04.107 issued rwts: total=1386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:04.107 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:04.107 00:42:04.107 Run status group 0 (all jobs): 00:42:04.107 READ: bw=111MiB/s (116MB/s), 34.3MiB/s-41.4MiB/s (36.0MB/s-43.4MB/s), io=560MiB (587MB), run=5004-5045msec 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:04.107 10:16:31 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:04.107 bdev_null0 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:04.107 [2024-12-07 10:16:31.920499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:04.107 bdev_null1 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:42:04.107 bdev_null2 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:42:04.107 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:04.108 10:16:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:04.108 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:04.108 10:16:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:04.108 { 00:42:04.108 "params": { 00:42:04.108 "name": "Nvme$subsystem", 00:42:04.108 "trtype": "$TEST_TRANSPORT", 00:42:04.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:04.108 "adrfam": "ipv4", 00:42:04.108 "trsvcid": "$NVMF_PORT", 00:42:04.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:04.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:04.108 "hdgst": ${hdgst:-false}, 00:42:04.108 "ddgst": ${ddgst:-false} 00:42:04.108 }, 00:42:04.108 "method": "bdev_nvme_attach_controller" 00:42:04.108 } 00:42:04.108 EOF 00:42:04.108 )") 00:42:04.108 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:42:04.108 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:04.108 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:04.108 10:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:04.108 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:42:04.108 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:04.108 10:16:31 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:42:04.108 10:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:04.108 10:16:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:04.108 { 00:42:04.108 "params": { 00:42:04.108 "name": "Nvme$subsystem", 00:42:04.108 "trtype": "$TEST_TRANSPORT", 00:42:04.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:04.108 "adrfam": "ipv4", 00:42:04.108 "trsvcid": "$NVMF_PORT", 00:42:04.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:04.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:04.108 "hdgst": ${hdgst:-false}, 00:42:04.108 "ddgst": ${ddgst:-false} 00:42:04.108 }, 00:42:04.108 "method": "bdev_nvme_attach_controller" 00:42:04.108 } 00:42:04.108 EOF 00:42:04.108 )") 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:04.108 10:16:32 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:04.108 { 00:42:04.108 "params": { 00:42:04.108 "name": "Nvme$subsystem", 00:42:04.108 "trtype": "$TEST_TRANSPORT", 00:42:04.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:04.108 "adrfam": "ipv4", 00:42:04.108 "trsvcid": "$NVMF_PORT", 00:42:04.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:04.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:04.108 "hdgst": ${hdgst:-false}, 00:42:04.108 "ddgst": ${ddgst:-false} 00:42:04.108 }, 00:42:04.108 "method": "bdev_nvme_attach_controller" 00:42:04.108 } 00:42:04.108 EOF 00:42:04.108 )") 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:42:04.108 "params": { 00:42:04.108 "name": "Nvme0", 00:42:04.108 "trtype": "tcp", 00:42:04.108 "traddr": "10.0.0.2", 00:42:04.108 "adrfam": "ipv4", 00:42:04.108 "trsvcid": "4420", 00:42:04.108 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:04.108 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:04.108 "hdgst": false, 00:42:04.108 "ddgst": false 00:42:04.108 }, 00:42:04.108 "method": "bdev_nvme_attach_controller" 00:42:04.108 },{ 00:42:04.108 "params": { 00:42:04.108 "name": "Nvme1", 00:42:04.108 "trtype": "tcp", 00:42:04.108 "traddr": "10.0.0.2", 00:42:04.108 "adrfam": "ipv4", 00:42:04.108 "trsvcid": "4420", 00:42:04.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:04.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:04.108 "hdgst": false, 00:42:04.108 "ddgst": false 00:42:04.108 }, 00:42:04.108 "method": "bdev_nvme_attach_controller" 00:42:04.108 },{ 00:42:04.108 "params": { 00:42:04.108 "name": "Nvme2", 00:42:04.108 "trtype": "tcp", 00:42:04.108 "traddr": "10.0.0.2", 00:42:04.108 "adrfam": "ipv4", 00:42:04.108 "trsvcid": "4420", 00:42:04.108 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:42:04.108 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:42:04.108 "hdgst": false, 00:42:04.108 "ddgst": false 00:42:04.108 }, 00:42:04.108 "method": "bdev_nvme_attach_controller" 00:42:04.108 }' 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:04.108 10:16:32 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:04.108 10:16:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:04.108 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:04.108 ... 00:42:04.108 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:04.108 ... 00:42:04.108 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:04.108 ... 
00:42:04.108 fio-3.35 00:42:04.108 Starting 24 threads 00:42:16.306 00:42:16.306 filename0: (groupid=0, jobs=1): err= 0: pid=1569696: Sat Dec 7 10:16:43 2024 00:42:16.306 read: IOPS=61, BW=246KiB/s (252kB/s)(2488KiB/10124msec) 00:42:16.306 slat (nsec): min=6846, max=32791, avg=10845.88, stdev=4721.38 00:42:16.306 clat (msec): min=178, max=279, avg=260.06, stdev=27.43 00:42:16.306 lat (msec): min=178, max=279, avg=260.07, stdev=27.43 00:42:16.306 clat percentiles (msec): 00:42:16.306 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 224], 20.00th=[ 253], 00:42:16.306 | 30.00th=[ 268], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:42:16.306 | 70.00th=[ 275], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 279], 00:42:16.306 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 279], 00:42:16.306 | 99.99th=[ 279] 00:42:16.306 bw ( KiB/s): min= 144, max= 256, per=4.50%, avg=242.40, stdev=34.15, samples=20 00:42:16.306 iops : min= 36, max= 64, avg=60.60, stdev= 8.54, samples=20 00:42:16.306 lat (msec) : 250=17.68%, 500=82.32% 00:42:16.306 cpu : usr=98.96%, sys=0.64%, ctx=5, majf=0, minf=21 00:42:16.306 IO depths : 1=0.3%, 2=6.6%, 4=25.1%, 8=55.9%, 16=12.1%, 32=0.0%, >=64=0.0% 00:42:16.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.306 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.306 issued rwts: total=622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.306 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.306 filename0: (groupid=0, jobs=1): err= 0: pid=1569697: Sat Dec 7 10:16:43 2024 00:42:16.306 read: IOPS=56, BW=228KiB/s (233kB/s)(2304KiB/10124msec) 00:42:16.306 slat (usec): min=6, max=107, avg=21.49, stdev=23.49 00:42:16.306 clat (msec): min=189, max=388, avg=281.00, stdev=37.90 00:42:16.306 lat (msec): min=189, max=388, avg=281.02, stdev=37.92 00:42:16.306 clat percentiles (msec): 00:42:16.306 | 1.00th=[ 190], 5.00th=[ 190], 10.00th=[ 266], 20.00th=[ 271], 00:42:16.306 | 
30.00th=[ 271], 40.00th=[ 275], 50.00th=[ 275], 60.00th=[ 275], 00:42:16.306 | 70.00th=[ 279], 80.00th=[ 279], 90.00th=[ 342], 95.00th=[ 384], 00:42:16.306 | 99.00th=[ 388], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:42:16.306 | 99.99th=[ 388] 00:42:16.306 bw ( KiB/s): min= 128, max= 256, per=4.14%, avg=224.00, stdev=56.87, samples=20 00:42:16.306 iops : min= 32, max= 64, avg=56.00, stdev=14.22, samples=20 00:42:16.306 lat (msec) : 250=5.56%, 500=94.44% 00:42:16.306 cpu : usr=98.78%, sys=0.83%, ctx=34, majf=0, minf=19 00:42:16.306 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:16.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.306 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.306 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.306 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.306 filename0: (groupid=0, jobs=1): err= 0: pid=1569698: Sat Dec 7 10:16:43 2024 00:42:16.306 read: IOPS=62, BW=252KiB/s (258kB/s)(2560KiB/10166msec) 00:42:16.306 slat (nsec): min=3133, max=19321, avg=8616.37, stdev=2240.79 00:42:16.306 clat (msec): min=48, max=280, avg=253.17, stdev=50.04 00:42:16.306 lat (msec): min=48, max=280, avg=253.17, stdev=50.04 00:42:16.306 clat percentiles (msec): 00:42:16.306 | 1.00th=[ 49], 5.00th=[ 62], 10.00th=[ 211], 20.00th=[ 253], 00:42:16.306 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 275], 00:42:16.306 | 70.00th=[ 275], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 279], 00:42:16.306 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 279], 00:42:16.306 | 99.99th=[ 279] 00:42:16.306 bw ( KiB/s): min= 144, max= 384, per=4.63%, avg=249.60, stdev=59.05, samples=20 00:42:16.306 iops : min= 36, max= 96, avg=62.40, stdev=14.76, samples=20 00:42:16.306 lat (msec) : 50=2.50%, 100=2.50%, 250=10.63%, 500=84.38% 00:42:16.306 cpu : usr=98.84%, sys=0.79%, ctx=14, majf=0, minf=20 00:42:16.306 IO 
depths : 1=0.8%, 2=7.0%, 4=25.0%, 8=55.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:42:16.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.306 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.306 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.306 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.306 filename0: (groupid=0, jobs=1): err= 0: pid=1569699: Sat Dec 7 10:16:43 2024 00:42:16.306 read: IOPS=61, BW=247KiB/s (253kB/s)(2504KiB/10141msec) 00:42:16.306 slat (nsec): min=4163, max=24787, avg=9218.98, stdev=3067.30 00:42:16.306 clat (msec): min=48, max=432, avg=257.94, stdev=53.22 00:42:16.306 lat (msec): min=48, max=432, avg=257.95, stdev=53.22 00:42:16.306 clat percentiles (msec): 00:42:16.306 | 1.00th=[ 49], 5.00th=[ 113], 10.00th=[ 224], 20.00th=[ 257], 00:42:16.306 | 30.00th=[ 271], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:42:16.306 | 70.00th=[ 275], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 279], 00:42:16.306 | 99.00th=[ 359], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435], 00:42:16.306 | 99.99th=[ 435] 00:42:16.306 bw ( KiB/s): min= 176, max= 384, per=4.52%, avg=244.00, stdev=44.92, samples=20 00:42:16.306 iops : min= 44, max= 96, avg=61.00, stdev=11.23, samples=20 00:42:16.306 lat (msec) : 50=2.24%, 100=2.56%, 250=13.42%, 500=81.79% 00:42:16.306 cpu : usr=98.86%, sys=0.77%, ctx=12, majf=0, minf=21 00:42:16.306 IO depths : 1=0.3%, 2=1.3%, 4=8.9%, 8=77.2%, 16=12.3%, 32=0.0%, >=64=0.0% 00:42:16.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.306 complete : 0=0.0%, 4=89.5%, 8=5.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.306 issued rwts: total=626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.306 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.306 filename0: (groupid=0, jobs=1): err= 0: pid=1569700: Sat Dec 7 10:16:43 2024 00:42:16.306 read: IOPS=60, BW=241KiB/s (246kB/s)(2432KiB/10109msec) 
00:42:16.306 slat (nsec): min=6007, max=53182, avg=8925.43, stdev=3575.30 00:42:16.306 clat (msec): min=180, max=354, avg=265.94, stdev=26.78 00:42:16.306 lat (msec): min=180, max=354, avg=265.95, stdev=26.78 00:42:16.306 clat percentiles (msec): 00:42:16.306 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 224], 20.00th=[ 264], 00:42:16.306 | 30.00th=[ 268], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:42:16.306 | 70.00th=[ 275], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 279], 00:42:16.306 | 99.00th=[ 355], 99.50th=[ 355], 99.90th=[ 355], 99.95th=[ 355], 00:42:16.306 | 99.99th=[ 355] 00:42:16.306 bw ( KiB/s): min= 128, max= 384, per=4.39%, avg=236.80, stdev=62.64, samples=20 00:42:16.306 iops : min= 32, max= 96, avg=59.20, stdev=15.66, samples=20 00:42:16.306 lat (msec) : 250=13.16%, 500=86.84% 00:42:16.306 cpu : usr=98.80%, sys=0.84%, ctx=13, majf=0, minf=14 00:42:16.306 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:16.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.306 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.306 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.306 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.306 filename0: (groupid=0, jobs=1): err= 0: pid=1569701: Sat Dec 7 10:16:43 2024 00:42:16.306 read: IOPS=61, BW=246KiB/s (252kB/s)(2488KiB/10121msec) 00:42:16.306 slat (nsec): min=6815, max=51383, avg=9848.95, stdev=4431.25 00:42:16.306 clat (msec): min=178, max=345, avg=259.98, stdev=28.53 00:42:16.306 lat (msec): min=178, max=345, avg=259.99, stdev=28.53 00:42:16.306 clat percentiles (msec): 00:42:16.306 | 1.00th=[ 180], 5.00th=[ 182], 10.00th=[ 224], 20.00th=[ 251], 00:42:16.306 | 30.00th=[ 268], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:42:16.306 | 70.00th=[ 275], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 279], 00:42:16.306 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 347], 99.95th=[ 347], 
00:42:16.306 | 99.99th=[ 347] 00:42:16.306 bw ( KiB/s): min= 144, max= 256, per=4.50%, avg=242.40, stdev=34.15, samples=20 00:42:16.306 iops : min= 36, max= 64, avg=60.60, stdev= 8.54, samples=20 00:42:16.306 lat (msec) : 250=18.01%, 500=81.99% 00:42:16.306 cpu : usr=98.65%, sys=0.98%, ctx=13, majf=0, minf=21 00:42:16.306 IO depths : 1=0.6%, 2=6.9%, 4=25.1%, 8=55.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:42:16.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.306 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.306 issued rwts: total=622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.306 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.306 filename0: (groupid=0, jobs=1): err= 0: pid=1569702: Sat Dec 7 10:16:43 2024 00:42:16.306 read: IOPS=56, BW=227KiB/s (233kB/s)(2296KiB/10100msec) 00:42:16.306 slat (nsec): min=4240, max=65463, avg=10775.89, stdev=8206.89 00:42:16.306 clat (msec): min=189, max=505, avg=281.22, stdev=44.98 00:42:16.306 lat (msec): min=189, max=505, avg=281.23, stdev=44.98 00:42:16.306 clat percentiles (msec): 00:42:16.306 | 1.00th=[ 190], 5.00th=[ 236], 10.00th=[ 264], 20.00th=[ 268], 00:42:16.306 | 30.00th=[ 271], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 275], 00:42:16.306 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 334], 95.00th=[ 384], 00:42:16.307 | 99.00th=[ 468], 99.50th=[ 468], 99.90th=[ 506], 99.95th=[ 506], 00:42:16.307 | 99.99th=[ 506] 00:42:16.307 bw ( KiB/s): min= 112, max= 256, per=4.14%, avg=223.20, stdev=53.31, samples=20 00:42:16.307 iops : min= 28, max= 64, avg=55.80, stdev=13.33, samples=20 00:42:16.307 lat (msec) : 250=5.23%, 500=94.43%, 750=0.35% 00:42:16.307 cpu : usr=98.76%, sys=0.87%, ctx=13, majf=0, minf=17 00:42:16.307 IO depths : 1=0.3%, 2=6.6%, 4=25.1%, 8=55.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:42:16.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.307 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:42:16.307 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.307 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.307 filename0: (groupid=0, jobs=1): err= 0: pid=1569703: Sat Dec 7 10:16:43 2024 00:42:16.307 read: IOPS=53, BW=213KiB/s (218kB/s)(2152KiB/10097msec) 00:42:16.307 slat (nsec): min=5520, max=65573, avg=11161.01, stdev=7723.72 00:42:16.307 clat (msec): min=194, max=517, avg=299.40, stdev=62.01 00:42:16.307 lat (msec): min=194, max=517, avg=299.41, stdev=62.01 00:42:16.307 clat percentiles (msec): 00:42:16.307 | 1.00th=[ 194], 5.00th=[ 253], 10.00th=[ 257], 20.00th=[ 264], 00:42:16.307 | 30.00th=[ 271], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 279], 00:42:16.307 | 70.00th=[ 288], 80.00th=[ 359], 90.00th=[ 422], 95.00th=[ 435], 00:42:16.307 | 99.00th=[ 460], 99.50th=[ 460], 99.90th=[ 518], 99.95th=[ 518], 00:42:16.307 | 99.99th=[ 518] 00:42:16.307 bw ( KiB/s): min= 128, max= 256, per=3.86%, avg=208.80, stdev=44.80, samples=20 00:42:16.307 iops : min= 32, max= 64, avg=52.20, stdev=11.20, samples=20 00:42:16.307 lat (msec) : 250=4.46%, 500=95.17%, 750=0.37% 00:42:16.307 cpu : usr=98.99%, sys=0.63%, ctx=11, majf=0, minf=15 00:42:16.307 IO depths : 1=0.9%, 2=2.4%, 4=9.7%, 8=74.7%, 16=12.3%, 32=0.0%, >=64=0.0% 00:42:16.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.307 complete : 0=0.0%, 4=89.4%, 8=6.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.307 issued rwts: total=538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.307 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.307 filename1: (groupid=0, jobs=1): err= 0: pid=1569704: Sat Dec 7 10:16:43 2024 00:42:16.307 read: IOPS=41, BW=164KiB/s (168kB/s)(1656KiB/10097msec) 00:42:16.307 slat (nsec): min=6071, max=41529, avg=8865.30, stdev=3641.55 00:42:16.307 clat (msec): min=193, max=549, avg=390.02, stdev=67.47 00:42:16.307 lat (msec): min=193, max=549, avg=390.03, stdev=67.47 00:42:16.307 clat 
percentiles (msec): 00:42:16.307 | 1.00th=[ 194], 5.00th=[ 224], 10.00th=[ 321], 20.00th=[ 376], 00:42:16.307 | 30.00th=[ 380], 40.00th=[ 380], 50.00th=[ 384], 60.00th=[ 418], 00:42:16.307 | 70.00th=[ 439], 80.00th=[ 439], 90.00th=[ 443], 95.00th=[ 460], 00:42:16.307 | 99.00th=[ 550], 99.50th=[ 550], 99.90th=[ 550], 99.95th=[ 550], 00:42:16.307 | 99.99th=[ 550] 00:42:16.307 bw ( KiB/s): min= 112, max= 256, per=2.95%, avg=159.20, stdev=52.29, samples=20 00:42:16.307 iops : min= 28, max= 64, avg=39.80, stdev=13.07, samples=20 00:42:16.307 lat (msec) : 250=6.76%, 500=90.82%, 750=2.42% 00:42:16.307 cpu : usr=98.70%, sys=0.92%, ctx=19, majf=0, minf=19 00:42:16.307 IO depths : 1=4.1%, 2=10.4%, 4=25.1%, 8=52.2%, 16=8.2%, 32=0.0%, >=64=0.0% 00:42:16.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.307 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.307 issued rwts: total=414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.307 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.307 filename1: (groupid=0, jobs=1): err= 0: pid=1569705: Sat Dec 7 10:16:43 2024 00:42:16.307 read: IOPS=54, BW=219KiB/s (225kB/s)(2216KiB/10097msec) 00:42:16.307 slat (nsec): min=5128, max=20013, avg=8755.42, stdev=2452.94 00:42:16.307 clat (msec): min=219, max=459, avg=290.92, stdev=59.21 00:42:16.307 lat (msec): min=219, max=459, avg=290.93, stdev=59.21 00:42:16.307 clat percentiles (msec): 00:42:16.307 | 1.00th=[ 220], 5.00th=[ 234], 10.00th=[ 236], 20.00th=[ 245], 00:42:16.307 | 30.00th=[ 262], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 275], 00:42:16.307 | 70.00th=[ 300], 80.00th=[ 309], 90.00th=[ 384], 95.00th=[ 439], 00:42:16.307 | 99.00th=[ 460], 99.50th=[ 460], 99.90th=[ 460], 99.95th=[ 460], 00:42:16.307 | 99.99th=[ 460] 00:42:16.307 bw ( KiB/s): min= 128, max= 304, per=3.99%, avg=215.20, stdev=48.27, samples=20 00:42:16.307 iops : min= 32, max= 76, avg=53.80, stdev=12.07, samples=20 00:42:16.307 lat (msec) : 
250=28.16%, 500=71.84% 00:42:16.307 cpu : usr=98.77%, sys=0.86%, ctx=8, majf=0, minf=13 00:42:16.307 IO depths : 1=0.4%, 2=0.9%, 4=6.9%, 8=79.1%, 16=12.8%, 32=0.0%, >=64=0.0% 00:42:16.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.307 complete : 0=0.0%, 4=88.7%, 8=6.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.307 issued rwts: total=554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.307 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.307 filename1: (groupid=0, jobs=1): err= 0: pid=1569706: Sat Dec 7 10:16:43 2024 00:42:16.307 read: IOPS=57, BW=229KiB/s (234kB/s)(2312KiB/10101msec) 00:42:16.307 slat (nsec): min=4162, max=60627, avg=9463.01, stdev=5292.00 00:42:16.307 clat (msec): min=180, max=483, avg=278.76, stdev=39.80 00:42:16.307 lat (msec): min=180, max=483, avg=278.77, stdev=39.80 00:42:16.307 clat percentiles (msec): 00:42:16.307 | 1.00th=[ 182], 5.00th=[ 234], 10.00th=[ 253], 20.00th=[ 271], 00:42:16.307 | 30.00th=[ 271], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 275], 00:42:16.307 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 334], 95.00th=[ 384], 00:42:16.307 | 99.00th=[ 439], 99.50th=[ 439], 99.90th=[ 485], 99.95th=[ 485], 00:42:16.307 | 99.99th=[ 485] 00:42:16.307 bw ( KiB/s): min= 128, max= 304, per=4.16%, avg=224.80, stdev=47.43, samples=20 00:42:16.307 iops : min= 32, max= 76, avg=56.20, stdev=11.86, samples=20 00:42:16.307 lat (msec) : 250=9.69%, 500=90.31% 00:42:16.307 cpu : usr=98.67%, sys=0.97%, ctx=9, majf=0, minf=12 00:42:16.307 IO depths : 1=0.7%, 2=1.6%, 4=8.7%, 8=77.2%, 16=11.9%, 32=0.0%, >=64=0.0% 00:42:16.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.307 complete : 0=0.0%, 4=89.4%, 8=5.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.307 issued rwts: total=578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.307 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.307 filename1: (groupid=0, jobs=1): err= 0: pid=1569707: Sat Dec 7 
10:16:43 2024 00:42:16.307 read: IOPS=60, BW=240KiB/s (246kB/s)(2432KiB/10121msec) 00:42:16.307 slat (nsec): min=6833, max=37899, avg=9291.82, stdev=4067.12 00:42:16.307 clat (msec): min=187, max=356, avg=265.71, stdev=23.24 00:42:16.307 lat (msec): min=187, max=356, avg=265.72, stdev=23.24 00:42:16.307 clat percentiles (msec): 00:42:16.307 | 1.00th=[ 188], 5.00th=[ 205], 10.00th=[ 234], 20.00th=[ 262], 00:42:16.307 | 30.00th=[ 271], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:42:16.307 | 70.00th=[ 275], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 279], 00:42:16.307 | 99.00th=[ 326], 99.50th=[ 326], 99.90th=[ 355], 99.95th=[ 355], 00:42:16.307 | 99.99th=[ 355] 00:42:16.307 bw ( KiB/s): min= 176, max= 336, per=4.39%, avg=236.80, stdev=39.40, samples=20 00:42:16.307 iops : min= 44, max= 84, avg=59.20, stdev= 9.85, samples=20 00:42:16.307 lat (msec) : 250=12.50%, 500=87.50% 00:42:16.307 cpu : usr=98.87%, sys=0.76%, ctx=12, majf=0, minf=15 00:42:16.307 IO depths : 1=0.3%, 2=1.3%, 4=9.2%, 8=77.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:42:16.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.307 complete : 0=0.0%, 4=89.6%, 8=4.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.307 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.307 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.307 filename1: (groupid=0, jobs=1): err= 0: pid=1569708: Sat Dec 7 10:16:43 2024 00:42:16.307 read: IOPS=61, BW=246KiB/s (252kB/s)(2488KiB/10123msec) 00:42:16.307 slat (nsec): min=6820, max=32251, avg=10772.00, stdev=4270.69 00:42:16.307 clat (msec): min=133, max=328, avg=259.98, stdev=32.48 00:42:16.307 lat (msec): min=133, max=328, avg=259.99, stdev=32.48 00:42:16.307 clat percentiles (msec): 00:42:16.307 | 1.00th=[ 134], 5.00th=[ 182], 10.00th=[ 224], 20.00th=[ 262], 00:42:16.307 | 30.00th=[ 268], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:42:16.307 | 70.00th=[ 275], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 279], 
00:42:16.307 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 330], 99.95th=[ 330], 00:42:16.307 | 99.99th=[ 330] 00:42:16.307 bw ( KiB/s): min= 144, max= 256, per=4.50%, avg=242.40, stdev=34.15, samples=20 00:42:16.307 iops : min= 36, max= 64, avg=60.60, stdev= 8.54, samples=20 00:42:16.307 lat (msec) : 250=15.76%, 500=84.24% 00:42:16.307 cpu : usr=98.72%, sys=0.92%, ctx=5, majf=0, minf=32 00:42:16.307 IO depths : 1=0.3%, 2=6.6%, 4=25.1%, 8=55.9%, 16=12.1%, 32=0.0%, >=64=0.0% 00:42:16.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.307 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.307 issued rwts: total=622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.307 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.307 filename1: (groupid=0, jobs=1): err= 0: pid=1569709: Sat Dec 7 10:16:43 2024 00:42:16.307 read: IOPS=59, BW=238KiB/s (244kB/s)(2408KiB/10121msec) 00:42:16.307 slat (nsec): min=6459, max=53173, avg=10146.74, stdev=4683.59 00:42:16.307 clat (msec): min=178, max=431, avg=267.63, stdev=44.18 00:42:16.307 lat (msec): min=178, max=431, avg=267.64, stdev=44.17 00:42:16.307 clat percentiles (msec): 00:42:16.307 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 218], 20.00th=[ 243], 00:42:16.307 | 30.00th=[ 264], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:42:16.307 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 321], 95.00th=[ 347], 00:42:16.307 | 99.00th=[ 430], 99.50th=[ 430], 99.90th=[ 430], 99.95th=[ 430], 00:42:16.307 | 99.99th=[ 430] 00:42:16.307 bw ( KiB/s): min= 176, max= 304, per=4.35%, avg=234.40, stdev=37.53, samples=20 00:42:16.307 iops : min= 44, max= 76, avg=58.60, stdev= 9.38, samples=20 00:42:16.307 lat (msec) : 250=22.92%, 500=77.08% 00:42:16.307 cpu : usr=98.63%, sys=1.01%, ctx=11, majf=0, minf=17 00:42:16.307 IO depths : 1=0.5%, 2=1.8%, 4=9.8%, 8=75.6%, 16=12.3%, 32=0.0%, >=64=0.0% 00:42:16.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:42:16.308 complete : 0=0.0%, 4=89.7%, 8=5.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.308 issued rwts: total=602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.308 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.308 filename1: (groupid=0, jobs=1): err= 0: pid=1569710: Sat Dec 7 10:16:43 2024 00:42:16.308 read: IOPS=55, BW=223KiB/s (229kB/s)(2256KiB/10109msec) 00:42:16.308 slat (nsec): min=6831, max=20457, avg=9024.68, stdev=2510.37 00:42:16.308 clat (msec): min=180, max=445, avg=286.14, stdev=56.43 00:42:16.308 lat (msec): min=180, max=445, avg=286.15, stdev=56.43 00:42:16.308 clat percentiles (msec): 00:42:16.308 | 1.00th=[ 182], 5.00th=[ 218], 10.00th=[ 239], 20.00th=[ 245], 00:42:16.308 | 30.00th=[ 255], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 279], 00:42:16.308 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 380], 95.00th=[ 397], 00:42:16.308 | 99.00th=[ 439], 99.50th=[ 447], 99.90th=[ 447], 99.95th=[ 447], 00:42:16.308 | 99.99th=[ 447] 00:42:16.308 bw ( KiB/s): min= 128, max= 272, per=4.07%, avg=219.20, stdev=41.24, samples=20 00:42:16.308 iops : min= 32, max= 68, avg=54.80, stdev=10.31, samples=20 00:42:16.308 lat (msec) : 250=23.76%, 500=76.24% 00:42:16.308 cpu : usr=98.70%, sys=0.92%, ctx=9, majf=0, minf=23 00:42:16.308 IO depths : 1=0.7%, 2=2.3%, 4=10.1%, 8=74.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:42:16.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.308 complete : 0=0.0%, 4=89.6%, 8=5.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.308 issued rwts: total=564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.308 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.308 filename1: (groupid=0, jobs=1): err= 0: pid=1569711: Sat Dec 7 10:16:43 2024 00:42:16.308 read: IOPS=61, BW=246KiB/s (252kB/s)(2488KiB/10124msec) 00:42:16.308 slat (nsec): min=6818, max=32032, avg=11568.68, stdev=4945.91 00:42:16.308 clat (msec): min=178, max=279, avg=260.02, stdev=27.49 00:42:16.308 lat (msec): min=178, 
max=279, avg=260.03, stdev=27.49 00:42:16.308 clat percentiles (msec): 00:42:16.308 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 224], 20.00th=[ 251], 00:42:16.308 | 30.00th=[ 268], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:42:16.308 | 70.00th=[ 275], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 279], 00:42:16.308 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 279], 00:42:16.308 | 99.99th=[ 279] 00:42:16.308 bw ( KiB/s): min= 144, max= 256, per=4.50%, avg=242.40, stdev=34.15, samples=20 00:42:16.308 iops : min= 36, max= 64, avg=60.60, stdev= 8.54, samples=20 00:42:16.308 lat (msec) : 250=18.01%, 500=81.99% 00:42:16.308 cpu : usr=98.72%, sys=0.91%, ctx=13, majf=0, minf=29 00:42:16.308 IO depths : 1=0.3%, 2=6.6%, 4=25.1%, 8=55.9%, 16=12.1%, 32=0.0%, >=64=0.0% 00:42:16.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.308 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.308 issued rwts: total=622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.308 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.308 filename2: (groupid=0, jobs=1): err= 0: pid=1569712: Sat Dec 7 10:16:43 2024 00:42:16.308 read: IOPS=53, BW=213KiB/s (218kB/s)(2152KiB/10108msec) 00:42:16.308 slat (nsec): min=4158, max=54402, avg=13013.08, stdev=9254.56 00:42:16.308 clat (msec): min=180, max=512, avg=299.65, stdev=60.24 00:42:16.308 lat (msec): min=180, max=512, avg=299.66, stdev=60.24 00:42:16.308 clat percentiles (msec): 00:42:16.308 | 1.00th=[ 182], 5.00th=[ 253], 10.00th=[ 253], 20.00th=[ 262], 00:42:16.308 | 30.00th=[ 271], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 288], 00:42:16.308 | 70.00th=[ 292], 80.00th=[ 359], 90.00th=[ 397], 95.00th=[ 435], 00:42:16.308 | 99.00th=[ 439], 99.50th=[ 468], 99.90th=[ 514], 99.95th=[ 514], 00:42:16.308 | 99.99th=[ 514] 00:42:16.308 bw ( KiB/s): min= 128, max= 256, per=3.86%, avg=208.80, stdev=46.86, samples=20 00:42:16.308 iops : min= 32, max= 64, avg=52.20, 
stdev=11.71, samples=20 00:42:16.308 lat (msec) : 250=4.09%, 500=95.54%, 750=0.37% 00:42:16.308 cpu : usr=99.11%, sys=0.49%, ctx=26, majf=0, minf=15 00:42:16.308 IO depths : 1=0.7%, 2=2.2%, 4=9.7%, 8=74.9%, 16=12.5%, 32=0.0%, >=64=0.0% 00:42:16.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.308 complete : 0=0.0%, 4=89.4%, 8=5.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.308 issued rwts: total=538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.308 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.308 filename2: (groupid=0, jobs=1): err= 0: pid=1569713: Sat Dec 7 10:16:43 2024 00:42:16.308 read: IOPS=39, BW=159KiB/s (163kB/s)(1600KiB/10071msec) 00:42:16.308 slat (nsec): min=6847, max=52035, avg=9693.65, stdev=5671.85 00:42:16.308 clat (msec): min=256, max=559, avg=402.74, stdev=58.18 00:42:16.308 lat (msec): min=256, max=559, avg=402.75, stdev=58.18 00:42:16.308 clat percentiles (msec): 00:42:16.308 | 1.00th=[ 264], 5.00th=[ 271], 10.00th=[ 342], 20.00th=[ 376], 00:42:16.308 | 30.00th=[ 380], 40.00th=[ 380], 50.00th=[ 384], 60.00th=[ 435], 00:42:16.308 | 70.00th=[ 439], 80.00th=[ 439], 90.00th=[ 447], 95.00th=[ 468], 00:42:16.308 | 99.00th=[ 558], 99.50th=[ 558], 99.90th=[ 558], 99.95th=[ 558], 00:42:16.308 | 99.99th=[ 558] 00:42:16.308 bw ( KiB/s): min= 112, max= 256, per=2.84%, avg=153.60, stdev=49.08, samples=20 00:42:16.308 iops : min= 28, max= 64, avg=38.40, stdev=12.27, samples=20 00:42:16.308 lat (msec) : 500=95.00%, 750=5.00% 00:42:16.308 cpu : usr=98.75%, sys=0.88%, ctx=13, majf=0, minf=21 00:42:16.308 IO depths : 1=3.5%, 2=9.8%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:42:16.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.308 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.308 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.308 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.308 
filename2: (groupid=0, jobs=1): err= 0: pid=1569714: Sat Dec 7 10:16:43 2024 00:42:16.308 read: IOPS=57, BW=232KiB/s (238kB/s)(2344KiB/10104msec) 00:42:16.308 slat (nsec): min=6815, max=30643, avg=8844.36, stdev=2729.98 00:42:16.308 clat (msec): min=180, max=454, avg=274.69, stdev=38.11 00:42:16.308 lat (msec): min=180, max=454, avg=274.70, stdev=38.11 00:42:16.308 clat percentiles (msec): 00:42:16.308 | 1.00th=[ 182], 5.00th=[ 230], 10.00th=[ 234], 20.00th=[ 259], 00:42:16.308 | 30.00th=[ 271], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 275], 00:42:16.308 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 309], 95.00th=[ 351], 00:42:16.308 | 99.00th=[ 430], 99.50th=[ 430], 99.90th=[ 456], 99.95th=[ 456], 00:42:16.308 | 99.99th=[ 456] 00:42:16.308 bw ( KiB/s): min= 128, max= 304, per=4.24%, avg=228.00, stdev=44.92, samples=20 00:42:16.308 iops : min= 32, max= 76, avg=57.00, stdev=11.23, samples=20 00:42:16.308 lat (msec) : 250=17.75%, 500=82.25% 00:42:16.308 cpu : usr=98.84%, sys=0.79%, ctx=9, majf=0, minf=15 00:42:16.308 IO depths : 1=0.2%, 2=0.5%, 4=6.8%, 8=79.9%, 16=12.6%, 32=0.0%, >=64=0.0% 00:42:16.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.308 complete : 0=0.0%, 4=88.8%, 8=6.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.308 issued rwts: total=586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.308 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.308 filename2: (groupid=0, jobs=1): err= 0: pid=1569715: Sat Dec 7 10:16:43 2024 00:42:16.308 read: IOPS=57, BW=229KiB/s (235kB/s)(2312KiB/10092msec) 00:42:16.308 slat (nsec): min=6822, max=45373, avg=9448.24, stdev=4159.46 00:42:16.308 clat (msec): min=203, max=489, avg=278.55, stdev=43.51 00:42:16.308 lat (msec): min=203, max=489, avg=278.56, stdev=43.51 00:42:16.308 clat percentiles (msec): 00:42:16.308 | 1.00th=[ 205], 5.00th=[ 234], 10.00th=[ 253], 20.00th=[ 271], 00:42:16.308 | 30.00th=[ 271], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 275], 00:42:16.308 | 
70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 284], 95.00th=[ 388], 00:42:16.308 | 99.00th=[ 460], 99.50th=[ 460], 99.90th=[ 489], 99.95th=[ 489], 00:42:16.308 | 99.99th=[ 489] 00:42:16.308 bw ( KiB/s): min= 128, max= 304, per=4.16%, avg=224.80, stdev=47.43, samples=20 00:42:16.308 iops : min= 32, max= 76, avg=56.20, stdev=11.86, samples=20 00:42:16.308 lat (msec) : 250=9.69%, 500=90.31% 00:42:16.308 cpu : usr=98.67%, sys=0.95%, ctx=9, majf=0, minf=23 00:42:16.308 IO depths : 1=0.7%, 2=1.6%, 4=8.7%, 8=77.2%, 16=11.9%, 32=0.0%, >=64=0.0% 00:42:16.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.308 complete : 0=0.0%, 4=89.4%, 8=5.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.308 issued rwts: total=578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.308 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.308 filename2: (groupid=0, jobs=1): err= 0: pid=1569716: Sat Dec 7 10:16:43 2024 00:42:16.308 read: IOPS=41, BW=165KiB/s (169kB/s)(1664KiB/10096msec) 00:42:16.308 slat (nsec): min=6825, max=23327, avg=9136.40, stdev=2770.55 00:42:16.308 clat (msec): min=203, max=552, avg=388.10, stdev=74.40 00:42:16.308 lat (msec): min=203, max=552, avg=388.11, stdev=74.40 00:42:16.308 clat percentiles (msec): 00:42:16.308 | 1.00th=[ 205], 5.00th=[ 218], 10.00th=[ 279], 20.00th=[ 347], 00:42:16.308 | 30.00th=[ 376], 40.00th=[ 380], 50.00th=[ 384], 60.00th=[ 439], 00:42:16.308 | 70.00th=[ 439], 80.00th=[ 439], 90.00th=[ 451], 95.00th=[ 460], 00:42:16.308 | 99.00th=[ 542], 99.50th=[ 550], 99.90th=[ 550], 99.95th=[ 550], 00:42:16.308 | 99.99th=[ 550] 00:42:16.308 bw ( KiB/s): min= 112, max= 256, per=2.97%, avg=160.00, stdev=53.95, samples=20 00:42:16.308 iops : min= 28, max= 64, avg=40.00, stdev=13.49, samples=20 00:42:16.308 lat (msec) : 250=7.21%, 500=87.98%, 750=4.81% 00:42:16.308 cpu : usr=98.61%, sys=1.02%, ctx=12, majf=0, minf=18 00:42:16.308 IO depths : 1=3.1%, 2=9.1%, 4=24.3%, 8=54.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:42:16.308 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.308 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.308 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.308 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.308 filename2: (groupid=0, jobs=1): err= 0: pid=1569717: Sat Dec 7 10:16:43 2024 00:42:16.308 read: IOPS=61, BW=247KiB/s (252kB/s)(2496KiB/10124msec) 00:42:16.309 slat (nsec): min=6830, max=33414, avg=11488.49, stdev=5237.94 00:42:16.309 clat (msec): min=178, max=279, avg=259.47, stdev=28.26 00:42:16.309 lat (msec): min=178, max=279, avg=259.48, stdev=28.26 00:42:16.309 clat percentiles (msec): 00:42:16.309 | 1.00th=[ 180], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 251], 00:42:16.309 | 30.00th=[ 266], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:42:16.309 | 70.00th=[ 275], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 279], 00:42:16.309 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 279], 00:42:16.309 | 99.99th=[ 279] 00:42:16.309 bw ( KiB/s): min= 128, max= 256, per=4.52%, avg=243.20, stdev=39.40, samples=20 00:42:16.309 iops : min= 32, max= 64, avg=60.80, stdev= 9.85, samples=20 00:42:16.309 lat (msec) : 250=17.95%, 500=82.05% 00:42:16.309 cpu : usr=98.12%, sys=1.51%, ctx=22, majf=0, minf=16 00:42:16.309 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:16.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.309 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.309 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.309 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.309 filename2: (groupid=0, jobs=1): err= 0: pid=1569718: Sat Dec 7 10:16:43 2024 00:42:16.309 read: IOPS=58, BW=236KiB/s (241kB/s)(2392KiB/10150msec) 00:42:16.309 slat (nsec): min=6531, max=34615, avg=16332.40, stdev=4346.55 00:42:16.309 clat (msec): 
min=48, max=441, avg=270.47, stdev=74.03 00:42:16.309 lat (msec): min=48, max=441, avg=270.49, stdev=74.03 00:42:16.309 clat percentiles (msec): 00:42:16.309 | 1.00th=[ 49], 5.00th=[ 61], 10.00th=[ 201], 20.00th=[ 245], 00:42:16.309 | 30.00th=[ 247], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 275], 00:42:16.309 | 70.00th=[ 279], 80.00th=[ 305], 90.00th=[ 380], 95.00th=[ 435], 00:42:16.309 | 99.00th=[ 439], 99.50th=[ 443], 99.90th=[ 443], 99.95th=[ 443], 00:42:16.309 | 99.99th=[ 443] 00:42:16.309 bw ( KiB/s): min= 176, max= 384, per=4.31%, avg=232.80, stdev=47.99, samples=20 00:42:16.309 iops : min= 44, max= 96, avg=58.20, stdev=12.00, samples=20 00:42:16.309 lat (msec) : 50=2.68%, 100=2.68%, 250=27.09%, 500=67.56% 00:42:16.309 cpu : usr=98.59%, sys=1.01%, ctx=10, majf=0, minf=22 00:42:16.309 IO depths : 1=0.7%, 2=2.2%, 4=9.7%, 8=74.9%, 16=12.5%, 32=0.0%, >=64=0.0% 00:42:16.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.309 complete : 0=0.0%, 4=89.5%, 8=5.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.309 issued rwts: total=598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.309 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.309 filename2: (groupid=0, jobs=1): err= 0: pid=1569719: Sat Dec 7 10:16:43 2024 00:42:16.309 read: IOPS=56, BW=224KiB/s (229kB/s)(2272KiB/10142msec) 00:42:16.309 slat (usec): min=6, max=100, avg=21.86, stdev=24.55 00:42:16.309 clat (msec): min=47, max=482, avg=285.21, stdev=81.43 00:42:16.309 lat (msec): min=47, max=482, avg=285.23, stdev=81.43 00:42:16.309 clat percentiles (msec): 00:42:16.309 | 1.00th=[ 48], 5.00th=[ 59], 10.00th=[ 197], 20.00th=[ 268], 00:42:16.309 | 30.00th=[ 271], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 279], 00:42:16.309 | 70.00th=[ 284], 80.00th=[ 347], 90.00th=[ 388], 95.00th=[ 439], 00:42:16.309 | 99.00th=[ 447], 99.50th=[ 447], 99.90th=[ 481], 99.95th=[ 481], 00:42:16.309 | 99.99th=[ 481] 00:42:16.309 bw ( KiB/s): min= 128, max= 384, per=4.09%, avg=220.80, 
stdev=59.32, samples=20 00:42:16.309 iops : min= 32, max= 96, avg=55.20, stdev=14.83, samples=20 00:42:16.309 lat (msec) : 50=2.82%, 100=2.82%, 250=4.93%, 500=89.44% 00:42:16.309 cpu : usr=98.71%, sys=0.90%, ctx=11, majf=0, minf=25 00:42:16.309 IO depths : 1=1.6%, 2=4.4%, 4=13.9%, 8=68.7%, 16=11.4%, 32=0.0%, >=64=0.0% 00:42:16.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.309 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.309 issued rwts: total=568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.309 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:16.309 00:42:16.309 Run status group 0 (all jobs): 00:42:16.309 READ: bw=5382KiB/s (5511kB/s), 159KiB/s-252KiB/s (163kB/s-258kB/s), io=53.4MiB (56.0MB), run=10071-10166msec 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.309 
10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:16.309 bdev_null0 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 
--serial-number 53313233-0 --allow-any-host 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:16.309 [2024-12-07 10:16:43.680455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.309 10:16:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:42:16.309 bdev_null1 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@556 -- # local subsystem config 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:16.310 { 00:42:16.310 "params": { 00:42:16.310 "name": "Nvme$subsystem", 00:42:16.310 "trtype": "$TEST_TRANSPORT", 00:42:16.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:16.310 "adrfam": "ipv4", 00:42:16.310 "trsvcid": "$NVMF_PORT", 00:42:16.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:16.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:16.310 "hdgst": ${hdgst:-false}, 00:42:16.310 "ddgst": ${ddgst:-false} 00:42:16.310 }, 00:42:16.310 "method": "bdev_nvme_attach_controller" 00:42:16.310 } 00:42:16.310 EOF 00:42:16.310 )") 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:16.310 10:16:43 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:16.310 { 00:42:16.310 "params": { 00:42:16.310 "name": "Nvme$subsystem", 00:42:16.310 "trtype": "$TEST_TRANSPORT", 00:42:16.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:16.310 "adrfam": "ipv4", 00:42:16.310 "trsvcid": "$NVMF_PORT", 00:42:16.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:16.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:16.310 "hdgst": ${hdgst:-false}, 00:42:16.310 "ddgst": ${ddgst:-false} 00:42:16.310 }, 00:42:16.310 "method": "bdev_nvme_attach_controller" 00:42:16.310 } 00:42:16.310 EOF 00:42:16.310 )") 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:16.310 10:16:43 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:42:16.310 "params": { 00:42:16.310 "name": "Nvme0", 00:42:16.310 "trtype": "tcp", 00:42:16.310 "traddr": "10.0.0.2", 00:42:16.310 "adrfam": "ipv4", 00:42:16.310 "trsvcid": "4420", 00:42:16.310 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:16.310 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:16.310 "hdgst": false, 00:42:16.310 "ddgst": false 00:42:16.310 }, 00:42:16.310 "method": "bdev_nvme_attach_controller" 00:42:16.310 },{ 00:42:16.310 "params": { 00:42:16.310 "name": "Nvme1", 00:42:16.310 "trtype": "tcp", 00:42:16.310 "traddr": "10.0.0.2", 00:42:16.310 "adrfam": "ipv4", 00:42:16.310 "trsvcid": "4420", 00:42:16.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:16.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:16.310 "hdgst": false, 00:42:16.310 "ddgst": false 00:42:16.310 }, 00:42:16.310 "method": "bdev_nvme_attach_controller" 00:42:16.310 }' 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 
00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:16.310 10:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:16.310 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:16.310 ... 00:42:16.310 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:16.310 ... 00:42:16.310 fio-3.35 00:42:16.310 Starting 4 threads 00:42:21.581 00:42:21.581 filename0: (groupid=0, jobs=1): err= 0: pid=1571668: Sat Dec 7 10:16:49 2024 00:42:21.581 read: IOPS=2750, BW=21.5MiB/s (22.5MB/s)(107MiB/5003msec) 00:42:21.581 slat (nsec): min=6130, max=80179, avg=14023.03, stdev=10511.90 00:42:21.581 clat (usec): min=691, max=5660, avg=2864.26, stdev=475.38 00:42:21.581 lat (usec): min=702, max=5672, avg=2878.28, stdev=476.03 00:42:21.581 clat percentiles (usec): 00:42:21.581 | 1.00th=[ 1729], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2474], 00:42:21.581 | 30.00th=[ 2671], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2966], 00:42:21.581 | 70.00th=[ 3032], 80.00th=[ 3130], 90.00th=[ 3294], 95.00th=[ 3556], 00:42:21.581 | 99.00th=[ 4555], 99.50th=[ 4752], 99.90th=[ 5211], 99.95th=[ 5342], 00:42:21.581 | 99.99th=[ 5669] 00:42:21.581 bw ( KiB/s): min=20640, max=23456, per=26.16%, avg=22009.60, stdev=889.47, samples=10 00:42:21.581 iops : min= 2580, max= 2932, avg=2751.20, stdev=111.18, samples=10 00:42:21.581 lat (usec) : 750=0.01%, 1000=0.10% 00:42:21.581 lat (msec) : 2=2.62%, 4=94.79%, 10=2.48% 00:42:21.581 cpu : usr=96.76%, sys=2.88%, ctx=12, majf=0, minf=9 00:42:21.581 IO depths : 1=0.3%, 2=8.5%, 4=63.1%, 8=28.1%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:42:21.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:21.581 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:21.581 issued rwts: total=13759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:21.581 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:21.581 filename0: (groupid=0, jobs=1): err= 0: pid=1571669: Sat Dec 7 10:16:49 2024 00:42:21.581 read: IOPS=2610, BW=20.4MiB/s (21.4MB/s)(102MiB/5001msec) 00:42:21.581 slat (nsec): min=6016, max=84859, avg=17917.35, stdev=9701.16 00:42:21.581 clat (usec): min=809, max=5413, avg=3013.66, stdev=401.23 00:42:21.581 lat (usec): min=822, max=5422, avg=3031.58, stdev=401.80 00:42:21.581 clat percentiles (usec): 00:42:21.581 | 1.00th=[ 1942], 5.00th=[ 2376], 10.00th=[ 2606], 20.00th=[ 2802], 00:42:21.581 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3032], 00:42:21.581 | 70.00th=[ 3130], 80.00th=[ 3228], 90.00th=[ 3458], 95.00th=[ 3687], 00:42:21.581 | 99.00th=[ 4293], 99.50th=[ 4621], 99.90th=[ 5145], 99.95th=[ 5276], 00:42:21.581 | 99.99th=[ 5407] 00:42:21.581 bw ( KiB/s): min=19936, max=21792, per=24.79%, avg=20856.89, stdev=616.63, samples=9 00:42:21.581 iops : min= 2492, max= 2724, avg=2607.11, stdev=77.08, samples=9 00:42:21.581 lat (usec) : 1000=0.02% 00:42:21.581 lat (msec) : 2=1.30%, 4=96.68%, 10=2.01% 00:42:21.581 cpu : usr=97.10%, sys=2.54%, ctx=40, majf=0, minf=9 00:42:21.581 IO depths : 1=0.2%, 2=5.7%, 4=64.1%, 8=30.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:21.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:21.581 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:21.581 issued rwts: total=13056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:21.581 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:21.581 filename1: (groupid=0, jobs=1): err= 0: pid=1571670: Sat Dec 7 10:16:49 2024 00:42:21.581 read: IOPS=2548, BW=19.9MiB/s 
(20.9MB/s)(99.6MiB/5001msec) 00:42:21.581 slat (nsec): min=6126, max=81070, avg=15759.06, stdev=11843.39 00:42:21.581 clat (usec): min=817, max=5722, avg=3089.36, stdev=450.22 00:42:21.581 lat (usec): min=824, max=5730, avg=3105.12, stdev=449.44 00:42:21.581 clat percentiles (usec): 00:42:21.581 | 1.00th=[ 2114], 5.00th=[ 2540], 10.00th=[ 2704], 20.00th=[ 2835], 00:42:21.581 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3064], 00:42:21.581 | 70.00th=[ 3163], 80.00th=[ 3294], 90.00th=[ 3589], 95.00th=[ 3949], 00:42:21.581 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5342], 99.95th=[ 5407], 00:42:21.581 | 99.99th=[ 5669] 00:42:21.581 bw ( KiB/s): min=19664, max=21264, per=24.22%, avg=20376.89, stdev=528.07, samples=9 00:42:21.581 iops : min= 2458, max= 2658, avg=2547.11, stdev=66.01, samples=9 00:42:21.581 lat (usec) : 1000=0.10% 00:42:21.581 lat (msec) : 2=0.58%, 4=94.65%, 10=4.67% 00:42:21.581 cpu : usr=97.30%, sys=2.36%, ctx=6, majf=0, minf=9 00:42:21.581 IO depths : 1=0.1%, 2=5.7%, 4=67.1%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:21.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:21.581 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:21.581 issued rwts: total=12745,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:21.581 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:21.581 filename1: (groupid=0, jobs=1): err= 0: pid=1571671: Sat Dec 7 10:16:49 2024 00:42:21.581 read: IOPS=2609, BW=20.4MiB/s (21.4MB/s)(102MiB/5002msec) 00:42:21.581 slat (nsec): min=6109, max=73406, avg=15791.13, stdev=11680.26 00:42:21.581 clat (usec): min=1090, max=5445, avg=3018.60, stdev=478.61 00:42:21.581 lat (usec): min=1103, max=5452, avg=3034.39, stdev=478.46 00:42:21.581 clat percentiles (usec): 00:42:21.581 | 1.00th=[ 1958], 5.00th=[ 2311], 10.00th=[ 2507], 20.00th=[ 2737], 00:42:21.581 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3032], 00:42:21.581 | 70.00th=[ 3097], 
80.00th=[ 3228], 90.00th=[ 3556], 95.00th=[ 3916], 00:42:21.581 | 99.00th=[ 4752], 99.50th=[ 4883], 99.90th=[ 5211], 99.95th=[ 5276], 00:42:21.581 | 99.99th=[ 5407] 00:42:21.581 bw ( KiB/s): min=20016, max=21536, per=24.81%, avg=20870.40, stdev=512.51, samples=10 00:42:21.581 iops : min= 2502, max= 2692, avg=2608.80, stdev=64.06, samples=10 00:42:21.581 lat (msec) : 2=1.26%, 4=94.19%, 10=4.54% 00:42:21.581 cpu : usr=97.34%, sys=2.30%, ctx=8, majf=0, minf=9 00:42:21.581 IO depths : 1=0.2%, 2=4.9%, 4=66.4%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:21.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:21.581 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:21.581 issued rwts: total=13051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:21.581 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:21.581 00:42:21.581 Run status group 0 (all jobs): 00:42:21.581 READ: bw=82.2MiB/s (86.1MB/s), 19.9MiB/s-21.5MiB/s (20.9MB/s-22.5MB/s), io=411MiB (431MB), run=5001-5003msec 00:42:21.581 10:16:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:42:21.581 10:16:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:21.581 10:16:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:21.581 10:16:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:21.581 10:16:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.582 00:42:21.582 real 0m24.295s 00:42:21.582 user 4m54.591s 00:42:21.582 sys 0m4.603s 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:21.582 10:16:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.582 ************************************ 00:42:21.582 END TEST fio_dif_rand_params 00:42:21.582 ************************************ 00:42:21.582 10:16:50 nvmf_dif -- target/dif.sh@144 -- # run_test 
fio_dif_digest fio_dif_digest 00:42:21.582 10:16:50 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:21.582 10:16:50 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:21.582 10:16:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:21.582 ************************************ 00:42:21.582 START TEST fio_dif_digest 00:42:21.582 ************************************ 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:21.582 10:16:50 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:21.582 bdev_null0 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:21.582 [2024-12-07 10:16:50.260391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- 
target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:21.582 { 00:42:21.582 "params": { 00:42:21.582 "name": "Nvme$subsystem", 00:42:21.582 "trtype": "$TEST_TRANSPORT", 00:42:21.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:21.582 "adrfam": "ipv4", 00:42:21.582 "trsvcid": "$NVMF_PORT", 00:42:21.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:21.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:21.582 "hdgst": ${hdgst:-false}, 00:42:21.582 "ddgst": ${ddgst:-false} 00:42:21.582 }, 00:42:21.582 "method": "bdev_nvme_attach_controller" 00:42:21.582 } 00:42:21.582 EOF 00:42:21.582 )") 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:42:21.582 10:16:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:42:21.582 "params": { 00:42:21.582 "name": "Nvme0", 00:42:21.582 "trtype": "tcp", 00:42:21.582 "traddr": "10.0.0.2", 00:42:21.582 "adrfam": "ipv4", 00:42:21.582 "trsvcid": "4420", 00:42:21.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:21.582 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:21.582 "hdgst": true, 00:42:21.582 "ddgst": true 00:42:21.582 }, 00:42:21.582 "method": "bdev_nvme_attach_controller" 00:42:21.582 }' 00:42:21.870 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:21.870 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:21.870 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:21.870 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:21.870 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:42:21.870 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:21.870 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:21.870 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:21.870 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:21.870 10:16:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:22.130 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:22.130 ... 
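The JSON fragment printed just above is what `gen_nvmf_target_json` emits for fio's spdk_bdev ioengine: one `bdev_nvme_attach_controller` entry per subsystem, with `hdgst`/`ddgst` falling back to `false` when unset. A minimal standalone sketch of that heredoc expansion follows; the variable values are hard-coded stand-ins copied from this log run, not read from a live target:

```shell
# Hedged sketch of the per-subsystem config block assembled by
# nvmf/common.sh in the trace above; values are placeholders for the
# environment the test exports (TEST_TRANSPORT, NVMF_PORT, ...).
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=0
hdgst=true
ddgst=true

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)

# If hdgst/ddgst were unset, ${var:-false} would substitute false here.
printf '%s\n' "$config"
```

In the actual test the accumulated entries are joined with `jq .` and streamed to the fio plugin over `/dev/fd/62`, which is why the trace shows the fully expanded JSON (`"hdgst": true, "ddgst": true`) right after the template.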
00:42:22.130 fio-3.35 00:42:22.130 Starting 3 threads 00:42:34.331 00:42:34.331 filename0: (groupid=0, jobs=1): err= 0: pid=1572737: Sat Dec 7 10:17:01 2024 00:42:34.331 read: IOPS=286, BW=35.8MiB/s (37.5MB/s)(360MiB/10045msec) 00:42:34.331 slat (nsec): min=4145, max=39672, avg=11740.59, stdev=2060.82 00:42:34.331 clat (usec): min=7743, max=54571, avg=10431.82, stdev=1744.62 00:42:34.331 lat (usec): min=7751, max=54585, avg=10443.56, stdev=1744.49 00:42:34.331 clat percentiles (usec): 00:42:34.331 | 1.00th=[ 8717], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:42:34.331 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:42:34.331 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:42:34.331 | 99.00th=[12256], 99.50th=[12518], 99.90th=[54789], 99.95th=[54789], 00:42:34.331 | 99.99th=[54789] 00:42:34.331 bw ( KiB/s): min=33280, max=37888, per=34.75%, avg=36812.80, stdev=974.29, samples=20 00:42:34.331 iops : min= 260, max= 296, avg=287.60, stdev= 7.61, samples=20 00:42:34.331 lat (msec) : 10=30.03%, 20=69.83%, 50=0.03%, 100=0.10% 00:42:34.331 cpu : usr=94.19%, sys=5.52%, ctx=27, majf=0, minf=9 00:42:34.331 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:34.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:34.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:34.332 issued rwts: total=2877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:34.332 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:34.332 filename0: (groupid=0, jobs=1): err= 0: pid=1572738: Sat Dec 7 10:17:01 2024 00:42:34.332 read: IOPS=267, BW=33.5MiB/s (35.1MB/s)(336MiB/10044msec) 00:42:34.332 slat (nsec): min=6464, max=51208, avg=12016.02, stdev=2081.53 00:42:34.332 clat (usec): min=6877, max=48296, avg=11167.11, stdev=1287.90 00:42:34.332 lat (usec): min=6891, max=48309, avg=11179.13, stdev=1287.80 00:42:34.332 clat percentiles (usec): 00:42:34.332 | 
1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10552], 00:42:34.332 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:42:34.332 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:42:34.332 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14484], 99.95th=[46924], 00:42:34.332 | 99.99th=[48497] 00:42:34.332 bw ( KiB/s): min=33792, max=35584, per=32.49%, avg=34419.20, stdev=534.90, samples=20 00:42:34.332 iops : min= 264, max= 278, avg=268.90, stdev= 4.18, samples=20 00:42:34.332 lat (msec) : 10=7.47%, 20=92.46%, 50=0.07% 00:42:34.332 cpu : usr=93.30%, sys=6.39%, ctx=28, majf=0, minf=11 00:42:34.332 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:34.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:34.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:34.332 issued rwts: total=2691,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:34.332 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:34.332 filename0: (groupid=0, jobs=1): err= 0: pid=1572739: Sat Dec 7 10:17:01 2024 00:42:34.332 read: IOPS=273, BW=34.2MiB/s (35.8MB/s)(343MiB/10045msec) 00:42:34.332 slat (usec): min=6, max=106, avg=12.02, stdev= 2.81 00:42:34.332 clat (usec): min=6953, max=47069, avg=10944.66, stdev=1244.51 00:42:34.332 lat (usec): min=6966, max=47082, avg=10956.68, stdev=1244.34 00:42:34.332 clat percentiles (usec): 00:42:34.332 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:42:34.332 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:42:34.332 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12256], 00:42:34.332 | 99.00th=[12911], 99.50th=[13173], 99.90th=[13960], 99.95th=[44827], 00:42:34.332 | 99.99th=[46924] 00:42:34.332 bw ( KiB/s): min=34304, max=36352, per=33.15%, avg=35123.20, stdev=590.82, samples=20 00:42:34.332 iops : min= 268, max= 284, avg=274.40, stdev= 4.62, samples=20 
00:42:34.332 lat (msec) : 10=10.96%, 20=88.97%, 50=0.07% 00:42:34.332 cpu : usr=94.05%, sys=5.65%, ctx=27, majf=0, minf=10 00:42:34.332 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:34.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:34.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:34.332 issued rwts: total=2746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:34.332 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:34.332 00:42:34.332 Run status group 0 (all jobs): 00:42:34.332 READ: bw=103MiB/s (108MB/s), 33.5MiB/s-35.8MiB/s (35.1MB/s-37.5MB/s), io=1039MiB (1090MB), run=10044-10045msec 00:42:34.332 10:17:01 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:34.332 10:17:01 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:34.332 10:17:01 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:34.332 10:17:01 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:34.332 10:17:01 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:34.332 10:17:01 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:34.332 10:17:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:34.332 10:17:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:34.332 10:17:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:34.332 10:17:01 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:34.332 10:17:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:34.332 10:17:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:34.332 10:17:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:34.332 00:42:34.332 real 0m11.166s 
00:42:34.332 user 0m34.826s 00:42:34.332 sys 0m2.041s 00:42:34.332 10:17:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:34.332 10:17:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:34.332 ************************************ 00:42:34.332 END TEST fio_dif_digest 00:42:34.332 ************************************ 00:42:34.332 10:17:01 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:34.332 10:17:01 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:34.332 10:17:01 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:34.332 10:17:01 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:42:34.332 10:17:01 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:34.332 10:17:01 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:42:34.332 10:17:01 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:34.332 10:17:01 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:34.332 rmmod nvme_tcp 00:42:34.332 rmmod nvme_fabrics 00:42:34.332 rmmod nvme_keyring 00:42:34.332 10:17:01 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:34.332 10:17:01 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:42:34.332 10:17:01 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:42:34.332 10:17:01 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 1564568 ']' 00:42:34.332 10:17:01 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 1564568 00:42:34.332 10:17:01 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1564568 ']' 00:42:34.332 10:17:01 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1564568 00:42:34.332 10:17:01 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:42:34.332 10:17:01 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:34.332 10:17:01 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1564568 00:42:34.332 10:17:01 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:34.332 10:17:01 nvmf_dif -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:34.332 10:17:01 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1564568' 00:42:34.332 killing process with pid 1564568 00:42:34.332 10:17:01 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1564568 00:42:34.332 10:17:01 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1564568 00:42:34.332 10:17:01 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:42:34.332 10:17:01 nvmf_dif -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:35.701 Waiting for block devices as requested 00:42:35.701 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:42:35.702 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:35.702 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:35.958 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:35.959 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:35.959 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:35.959 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:36.216 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:36.216 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:36.216 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:36.216 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:36.475 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:36.475 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:36.475 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:36.732 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:36.732 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:36.732 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:36.732 10:17:05 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:36.732 10:17:05 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:36.732 10:17:05 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:42:36.732 10:17:05 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:42:36.732 10:17:05 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 
00:42:36.732 10:17:05 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:42:36.990 10:17:05 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:36.990 10:17:05 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:36.990 10:17:05 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:36.990 10:17:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:36.990 10:17:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:38.892 10:17:07 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:38.892 00:42:38.892 real 1m13.024s 00:42:38.892 user 7m11.120s 00:42:38.892 sys 0m19.213s 00:42:38.892 10:17:07 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:38.892 10:17:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:38.892 ************************************ 00:42:38.892 END TEST nvmf_dif 00:42:38.892 ************************************ 00:42:38.892 10:17:07 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:38.892 10:17:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:38.892 10:17:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:38.892 10:17:07 -- common/autotest_common.sh@10 -- # set +x 00:42:38.892 ************************************ 00:42:38.892 START TEST nvmf_abort_qd_sizes 00:42:38.892 ************************************ 00:42:38.892 10:17:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:39.150 * Looking for test storage... 
00:42:39.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:39.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:39.151 --rc genhtml_branch_coverage=1 00:42:39.151 --rc genhtml_function_coverage=1 00:42:39.151 --rc genhtml_legend=1 00:42:39.151 --rc geninfo_all_blocks=1 00:42:39.151 --rc geninfo_unexecuted_blocks=1 00:42:39.151 00:42:39.151 ' 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:39.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:39.151 --rc genhtml_branch_coverage=1 00:42:39.151 --rc genhtml_function_coverage=1 00:42:39.151 --rc genhtml_legend=1 00:42:39.151 --rc 
geninfo_all_blocks=1 00:42:39.151 --rc geninfo_unexecuted_blocks=1 00:42:39.151 00:42:39.151 ' 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:39.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:39.151 --rc genhtml_branch_coverage=1 00:42:39.151 --rc genhtml_function_coverage=1 00:42:39.151 --rc genhtml_legend=1 00:42:39.151 --rc geninfo_all_blocks=1 00:42:39.151 --rc geninfo_unexecuted_blocks=1 00:42:39.151 00:42:39.151 ' 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:39.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:39.151 --rc genhtml_branch_coverage=1 00:42:39.151 --rc genhtml_function_coverage=1 00:42:39.151 --rc genhtml_legend=1 00:42:39.151 --rc geninfo_all_blocks=1 00:42:39.151 --rc geninfo_unexecuted_blocks=1 00:42:39.151 00:42:39.151 ' 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:39.151 10:17:07 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:39.151 10:17:07 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:39.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:39.151 10:17:07 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:42:39.152 10:17:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:44.571 10:17:12 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:42:44.571 Found 0000:86:00.0 (0x8086 - 0x159b) 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:42:44.571 Found 0000:86:00.1 (0x8086 - 0x159b) 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:42:44.571 Found net devices under 0000:86:00.0: cvl_0_0 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:44.571 
10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:44.571 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:42:44.572 Found net devices under 0000:86:00.1: cvl_0_1 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # is_hw=yes 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:44.572 10:17:12 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:44.572 10:17:13 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:44.572 10:17:13 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:44.572 10:17:13 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:44.572 10:17:13 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:44.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:44.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:42:44.572 00:42:44.572 --- 10.0.0.2 ping statistics --- 00:42:44.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:44.572 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:42:44.572 10:17:13 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:44.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:44.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:42:44.572 00:42:44.572 --- 10.0.0.1 ping statistics --- 00:42:44.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:44.572 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:42:44.572 10:17:13 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:44.572 10:17:13 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # return 0 00:42:44.572 10:17:13 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:42:44.572 10:17:13 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:47.103 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:42:47.103 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:42:47.103 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:42:47.103 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:42:47.103 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:42:47.103 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:42:47.103 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:42:47.103 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:42:47.103 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:42:47.103 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:42:47.103 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:42:47.103 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:42:47.103 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:42:47.103 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:42:47.103 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:42:47.103 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:42:48.036 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:42:48.036 10:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:48.036 10:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:48.036 10:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:48.036 10:17:16 
nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:48.036 10:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:48.036 10:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:48.293 10:17:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:48.293 10:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:48.293 10:17:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:48.293 10:17:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:48.293 10:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=1580519 00:42:48.293 10:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:48.293 10:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 1580519 00:42:48.293 10:17:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1580519 ']' 00:42:48.293 10:17:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:48.294 10:17:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:48.294 10:17:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:48.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:48.294 10:17:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:48.294 10:17:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:48.294 [2024-12-07 10:17:16.818379] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:42:48.294 [2024-12-07 10:17:16.818426] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:48.294 [2024-12-07 10:17:16.877621] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:48.294 [2024-12-07 10:17:16.920334] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:48.294 [2024-12-07 10:17:16.920376] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:48.294 [2024-12-07 10:17:16.920384] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:48.294 [2024-12-07 10:17:16.920390] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:48.294 [2024-12-07 10:17:16.920395] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:48.294 [2024-12-07 10:17:16.920444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:48.294 [2024-12-07 10:17:16.920541] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:42:48.294 [2024-12-07 10:17:16.920631] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:42:48.294 [2024-12-07 10:17:16.920632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:48.551 10:17:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:48.551 ************************************ 00:42:48.551 START TEST spdk_target_abort 00:42:48.551 ************************************ 00:42:48.551 10:17:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:42:48.551 10:17:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:48.551 10:17:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:42:48.551 10:17:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:48.551 10:17:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:51.826 spdk_targetn1 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:51.826 [2024-12-07 10:17:19.933627] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:51.826 [2024-12-07 10:17:19.969864] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:51.826 10:17:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:55.113 Initializing NVMe Controllers 00:42:55.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:55.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:55.113 Initialization complete. Launching workers. 
00:42:55.113 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14381, failed: 0 00:42:55.113 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1367, failed to submit 13014 00:42:55.113 success 739, unsuccessful 628, failed 0 00:42:55.113 10:17:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:55.113 10:17:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:58.386 Initializing NVMe Controllers 00:42:58.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:58.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:58.386 Initialization complete. Launching workers. 00:42:58.386 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8508, failed: 0 00:42:58.386 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1238, failed to submit 7270 00:42:58.386 success 307, unsuccessful 931, failed 0 00:42:58.386 10:17:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:58.386 10:17:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:01.667 Initializing NVMe Controllers 00:43:01.667 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:01.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:01.668 Initialization complete. Launching workers. 
00:43:01.668 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37179, failed: 0 00:43:01.668 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2849, failed to submit 34330 00:43:01.668 success 577, unsuccessful 2272, failed 0 00:43:01.668 10:17:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:43:01.668 10:17:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:01.668 10:17:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:01.668 10:17:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:01.668 10:17:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:43:01.668 10:17:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:01.668 10:17:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:02.603 10:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:02.603 10:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1580519 00:43:02.603 10:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1580519 ']' 00:43:02.603 10:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1580519 00:43:02.603 10:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:43:02.603 10:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:02.604 10:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1580519 00:43:02.604 10:17:31 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1580519' 00:43:02.604 killing process with pid 1580519 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1580519 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1580519 00:43:02.604 00:43:02.604 real 0m14.112s 00:43:02.604 user 0m53.918s 00:43:02.604 sys 0m2.357s 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:02.604 ************************************ 00:43:02.604 END TEST spdk_target_abort 00:43:02.604 ************************************ 00:43:02.604 10:17:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:43:02.604 10:17:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:02.604 10:17:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:02.604 10:17:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:02.604 ************************************ 00:43:02.604 START TEST kernel_target_abort 00:43:02.604 ************************************ 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:43:02.604 10:17:31 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@663 -- # local block nvme 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:43:02.604 10:17:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:05.136 Waiting for block devices as requested 00:43:05.136 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:05.136 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:05.136 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:05.136 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:05.136 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:05.136 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:05.394 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:05.394 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:05.394 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:05.394 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:05.652 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:05.652 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:05.652 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:05.910 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:05.910 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:05.910 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:05.910 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:06.169 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:43:06.169 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:43:06.169 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:43:06.169 10:17:34 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:43:06.169 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:43:06.169 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:43:06.169 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:43:06.169 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:43:06.169 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:43:06.169 No valid GPT data, bailing 00:43:06.169 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:43:06.169 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:43:06.169 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@691 -- # echo 1 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:43:06.170 00:43:06.170 Discovery Log Number of Records 2, Generation counter 2 00:43:06.170 =====Discovery Log Entry 0====== 00:43:06.170 trtype: tcp 00:43:06.170 adrfam: ipv4 00:43:06.170 subtype: current discovery subsystem 00:43:06.170 treq: not specified, sq flow control disable supported 00:43:06.170 portid: 1 00:43:06.170 trsvcid: 4420 00:43:06.170 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:43:06.170 traddr: 10.0.0.1 00:43:06.170 eflags: none 00:43:06.170 sectype: none 00:43:06.170 =====Discovery Log Entry 1====== 00:43:06.170 trtype: tcp 00:43:06.170 adrfam: ipv4 00:43:06.170 subtype: nvme subsystem 00:43:06.170 treq: not specified, sq flow control disable supported 00:43:06.170 portid: 1 00:43:06.170 trsvcid: 4420 00:43:06.170 subnqn: nqn.2016-06.io.spdk:testnqn 00:43:06.170 traddr: 10.0.0.1 00:43:06.170 eflags: none 00:43:06.170 sectype: none 00:43:06.170 10:17:34 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:06.170 10:17:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:09.453 Initializing NVMe Controllers 00:43:09.453 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:09.453 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:09.453 Initialization complete. Launching workers. 
00:43:09.453 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87989, failed: 0 00:43:09.453 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 87989, failed to submit 0 00:43:09.453 success 0, unsuccessful 87989, failed 0 00:43:09.453 10:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:09.453 10:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:12.736 Initializing NVMe Controllers 00:43:12.736 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:12.736 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:12.736 Initialization complete. Launching workers. 00:43:12.736 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 139918, failed: 0 00:43:12.736 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35162, failed to submit 104756 00:43:12.736 success 0, unsuccessful 35162, failed 0 00:43:12.736 10:17:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:12.736 10:17:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:16.022 Initializing NVMe Controllers 00:43:16.022 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:16.022 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:16.022 Initialization complete. Launching workers. 
00:43:16.022 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 133328, failed: 0 00:43:16.022 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33366, failed to submit 99962 00:43:16.022 success 0, unsuccessful 33366, failed 0 00:43:16.022 10:17:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:43:16.022 10:17:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:43:16.022 10:17:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:43:16.022 10:17:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:16.022 10:17:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:16.022 10:17:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:43:16.022 10:17:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:16.022 10:17:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:43:16.022 10:17:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:43:16.022 10:17:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:18.552 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:18.552 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:18.552 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:18.552 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:18.552 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:18.552 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:18.552 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:18.552 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:18.552 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:18.552 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:18.552 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:18.552 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:18.552 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:18.552 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:18.552 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:18.552 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:19.117 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:19.376 00:43:19.376 real 0m16.639s 00:43:19.376 user 0m8.256s 00:43:19.376 sys 0m4.656s 00:43:19.376 10:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:19.376 10:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:19.376 ************************************ 00:43:19.376 END TEST kernel_target_abort 00:43:19.376 ************************************ 00:43:19.376 10:17:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:43:19.376 10:17:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:43:19.376 10:17:47 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:43:19.376 10:17:47 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:43:19.376 10:17:47 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:19.376 10:17:47 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:43:19.376 10:17:47 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:19.376 10:17:47 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:19.376 rmmod nvme_tcp 00:43:19.376 rmmod nvme_fabrics 00:43:19.376 rmmod nvme_keyring 00:43:19.376 10:17:48 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:43:19.376 10:17:48 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:43:19.376 10:17:48 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:43:19.376 10:17:48 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 1580519 ']' 00:43:19.376 10:17:48 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 1580519 00:43:19.376 10:17:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1580519 ']' 00:43:19.376 10:17:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1580519 00:43:19.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1580519) - No such process 00:43:19.376 10:17:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1580519 is not found' 00:43:19.376 Process with pid 1580519 is not found 00:43:19.376 10:17:48 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:43:19.376 10:17:48 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:21.903 Waiting for block devices as requested 00:43:21.903 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:21.903 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:21.903 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:21.903 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:21.903 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:21.903 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:22.160 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:22.160 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:22.160 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:22.160 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:22.419 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:22.419 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:22.419 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:22.419 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:22.677 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:22.677 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:22.677 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:22.935 10:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:43:22.935 10:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:43:22.935 10:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:43:22.935 10:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:43:22.935 10:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:43:22.935 10:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:43:22.935 10:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:22.935 10:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:22.936 10:17:51 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:22.936 10:17:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:22.936 10:17:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:24.837 10:17:53 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:24.837 00:43:24.837 real 0m45.917s 00:43:24.837 user 1m5.829s 00:43:24.837 sys 0m14.882s 00:43:24.837 10:17:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:24.837 10:17:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:24.837 ************************************ 00:43:24.837 END TEST nvmf_abort_qd_sizes 00:43:24.837 ************************************ 00:43:24.837 10:17:53 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:24.837 10:17:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:24.837 10:17:53 -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:43:24.837 10:17:53 -- common/autotest_common.sh@10 -- # set +x 00:43:25.095 ************************************ 00:43:25.095 START TEST keyring_file 00:43:25.095 ************************************ 00:43:25.095 10:17:53 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:25.095 * Looking for test storage... 00:43:25.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:25.095 10:17:53 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:25.095 10:17:53 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:43:25.095 10:17:53 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:25.096 10:17:53 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@345 -- # : 1 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:25.096 10:17:53 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@353 -- # local d=1 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@355 -- # echo 1 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@353 -- # local d=2 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@355 -- # echo 2 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@368 -- # return 0 00:43:25.096 10:17:53 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:25.096 10:17:53 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:25.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:25.096 --rc genhtml_branch_coverage=1 00:43:25.096 --rc genhtml_function_coverage=1 00:43:25.096 --rc genhtml_legend=1 00:43:25.096 --rc geninfo_all_blocks=1 00:43:25.096 --rc geninfo_unexecuted_blocks=1 00:43:25.096 00:43:25.096 ' 00:43:25.096 10:17:53 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:25.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:25.096 --rc genhtml_branch_coverage=1 00:43:25.096 --rc genhtml_function_coverage=1 00:43:25.096 --rc genhtml_legend=1 00:43:25.096 --rc geninfo_all_blocks=1 00:43:25.096 --rc 
geninfo_unexecuted_blocks=1 00:43:25.096 00:43:25.096 ' 00:43:25.096 10:17:53 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:25.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:25.096 --rc genhtml_branch_coverage=1 00:43:25.096 --rc genhtml_function_coverage=1 00:43:25.096 --rc genhtml_legend=1 00:43:25.096 --rc geninfo_all_blocks=1 00:43:25.096 --rc geninfo_unexecuted_blocks=1 00:43:25.096 00:43:25.096 ' 00:43:25.096 10:17:53 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:25.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:25.096 --rc genhtml_branch_coverage=1 00:43:25.096 --rc genhtml_function_coverage=1 00:43:25.096 --rc genhtml_legend=1 00:43:25.096 --rc geninfo_all_blocks=1 00:43:25.096 --rc geninfo_unexecuted_blocks=1 00:43:25.096 00:43:25.096 ' 00:43:25.096 10:17:53 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:25.096 10:17:53 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:25.096 10:17:53 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:25.096 10:17:53 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:25.096 10:17:53 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:25.096 10:17:53 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:25.096 10:17:53 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:25.096 10:17:53 keyring_file -- paths/export.sh@5 -- # export PATH 00:43:25.096 10:17:53 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@51 -- # : 0 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:43:25.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:25.096 10:17:53 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:25.096 10:17:53 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:25.096 10:17:53 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:25.096 10:17:53 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:43:25.096 10:17:53 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:43:25.096 10:17:53 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:43:25.096 10:17:53 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:25.096 10:17:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:25.096 10:17:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:25.096 10:17:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:25.096 10:17:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:25.096 10:17:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:25.096 10:17:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xApY3mgKXQ 00:43:25.096 10:17:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@728 
-- # key=00112233445566778899aabbccddeeff 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:43:25.096 10:17:53 keyring_file -- nvmf/common.sh@729 -- # python - 00:43:25.096 10:17:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xApY3mgKXQ 00:43:25.096 10:17:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xApY3mgKXQ 00:43:25.096 10:17:53 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.xApY3mgKXQ 00:43:25.096 10:17:53 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:43:25.096 10:17:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:25.096 10:17:53 keyring_file -- keyring/common.sh@17 -- # name=key1 00:43:25.096 10:17:53 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:25.096 10:17:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:25.354 10:17:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:25.354 10:17:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mTWtnYCVJD 00:43:25.354 10:17:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:25.354 10:17:53 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:25.354 10:17:53 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:43:25.354 10:17:53 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:43:25.354 10:17:53 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:43:25.354 10:17:53 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:43:25.354 10:17:53 keyring_file -- nvmf/common.sh@729 -- # python - 00:43:25.354 10:17:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mTWtnYCVJD 00:43:25.354 10:17:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mTWtnYCVJD 00:43:25.354 10:17:53 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.mTWtnYCVJD 
00:43:25.354 10:17:53 keyring_file -- keyring/file.sh@30 -- # tgtpid=1588921 00:43:25.354 10:17:53 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:25.354 10:17:53 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1588921 00:43:25.354 10:17:53 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1588921 ']' 00:43:25.354 10:17:53 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:25.354 10:17:53 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:25.354 10:17:53 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:25.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:25.354 10:17:53 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:25.354 10:17:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:25.354 [2024-12-07 10:17:53.910364] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:43:25.354 [2024-12-07 10:17:53.910415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1588921 ] 00:43:25.354 [2024-12-07 10:17:53.964445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:25.354 [2024-12-07 10:17:54.005274] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:43:25.611 10:17:54 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:25.611 [2024-12-07 10:17:54.200270] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:25.611 null0 00:43:25.611 [2024-12-07 10:17:54.232330] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:25.611 [2024-12-07 10:17:54.232682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:25.611 10:17:54 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:25.611 [2024-12-07 10:17:54.260392] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:43:25.611 request: 00:43:25.611 { 00:43:25.611 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:43:25.611 "secure_channel": false, 00:43:25.611 "listen_address": { 00:43:25.611 "trtype": "tcp", 00:43:25.611 "traddr": "127.0.0.1", 00:43:25.611 "trsvcid": "4420" 00:43:25.611 }, 00:43:25.611 "method": "nvmf_subsystem_add_listener", 00:43:25.611 "req_id": 1 00:43:25.611 } 00:43:25.611 Got JSON-RPC error response 00:43:25.611 response: 00:43:25.611 { 00:43:25.611 "code": -32602, 00:43:25.611 "message": "Invalid parameters" 00:43:25.611 } 00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:25.611 10:17:54 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:25.611 10:17:54 keyring_file -- keyring/file.sh@47 -- # bperfpid=1589059 00:43:25.611 10:17:54 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1589059 /var/tmp/bperf.sock 00:43:25.611 10:17:54 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:43:25.611 10:17:54 
keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1589059 ']' 00:43:25.612 10:17:54 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:25.612 10:17:54 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:25.612 10:17:54 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:25.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:25.612 10:17:54 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:25.612 10:17:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:25.612 [2024-12-07 10:17:54.315330] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:43:25.612 [2024-12-07 10:17:54.315374] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589059 ] 00:43:25.869 [2024-12-07 10:17:54.369461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:25.869 [2024-12-07 10:17:54.411018] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:25.869 10:17:54 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:25.869 10:17:54 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:43:25.869 10:17:54 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xApY3mgKXQ 00:43:25.869 10:17:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xApY3mgKXQ 00:43:26.126 10:17:54 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mTWtnYCVJD 00:43:26.126 10:17:54 keyring_file -- keyring/common.sh@8 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mTWtnYCVJD 00:43:26.383 10:17:54 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:43:26.383 10:17:54 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:43:26.383 10:17:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:26.383 10:17:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:26.383 10:17:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:26.383 10:17:55 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.xApY3mgKXQ == \/\t\m\p\/\t\m\p\.\x\A\p\Y\3\m\g\K\X\Q ]] 00:43:26.383 10:17:55 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:43:26.383 10:17:55 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:43:26.383 10:17:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:26.383 10:17:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:26.383 10:17:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:26.648 10:17:55 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.mTWtnYCVJD == \/\t\m\p\/\t\m\p\.\m\T\W\t\n\Y\C\V\J\D ]] 00:43:26.648 10:17:55 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:43:26.648 10:17:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:26.648 10:17:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:26.648 10:17:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:26.648 10:17:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:26.648 10:17:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:43:26.909 10:17:55 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:43:26.909 10:17:55 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:43:26.909 10:17:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:26.909 10:17:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:26.909 10:17:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:26.909 10:17:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:26.909 10:17:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:27.166 10:17:55 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:43:27.166 10:17:55 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:27.166 10:17:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:27.166 [2024-12-07 10:17:55.851822] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:27.422 nvme0n1 00:43:27.422 10:17:55 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:43:27.422 10:17:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:27.422 10:17:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:27.422 10:17:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:27.422 10:17:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:27.422 10:17:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:43:27.422 10:17:56 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:43:27.422 10:17:56 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:43:27.422 10:17:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:27.692 10:17:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:27.692 10:17:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:27.692 10:17:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:27.692 10:17:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:27.692 10:17:56 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:43:27.692 10:17:56 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:27.949 Running I/O for 1 seconds... 
00:43:28.879 16478.00 IOPS, 64.37 MiB/s 00:43:28.879 Latency(us) 00:43:28.879 [2024-12-07T09:17:57.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:28.879 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:43:28.879 nvme0n1 : 1.01 16522.09 64.54 0.00 0.00 7730.51 3732.70 15044.79 00:43:28.879 [2024-12-07T09:17:57.605Z] =================================================================================================================== 00:43:28.879 [2024-12-07T09:17:57.605Z] Total : 16522.09 64.54 0.00 0.00 7730.51 3732.70 15044.79 00:43:28.879 { 00:43:28.879 "results": [ 00:43:28.879 { 00:43:28.879 "job": "nvme0n1", 00:43:28.879 "core_mask": "0x2", 00:43:28.879 "workload": "randrw", 00:43:28.879 "percentage": 50, 00:43:28.879 "status": "finished", 00:43:28.879 "queue_depth": 128, 00:43:28.879 "io_size": 4096, 00:43:28.879 "runtime": 1.005139, 00:43:28.879 "iops": 16522.092964256684, 00:43:28.879 "mibps": 64.53942564162767, 00:43:28.879 "io_failed": 0, 00:43:28.879 "io_timeout": 0, 00:43:28.879 "avg_latency_us": 7730.509962640164, 00:43:28.879 "min_latency_us": 3732.702608695652, 00:43:28.879 "max_latency_us": 15044.786086956521 00:43:28.879 } 00:43:28.879 ], 00:43:28.879 "core_count": 1 00:43:28.879 } 00:43:28.879 10:17:57 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:28.879 10:17:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:29.137 10:17:57 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:43:29.137 10:17:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:29.137 10:17:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:29.137 10:17:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:29.137 10:17:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:43:29.137 10:17:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:29.137 10:17:57 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:43:29.137 10:17:57 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:43:29.137 10:17:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:29.137 10:17:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:29.137 10:17:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:29.137 10:17:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:29.137 10:17:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:29.394 10:17:58 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:43:29.395 10:17:58 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:29.395 10:17:58 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:43:29.395 10:17:58 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:29.395 10:17:58 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:43:29.395 10:17:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:29.395 10:17:58 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:43:29.395 10:17:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:29.395 10:17:58 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:29.395 10:17:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:29.652 [2024-12-07 10:17:58.226380] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:29.652 [2024-12-07 10:17:58.227100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12255b0 (107): Transport endpoint is not connected 00:43:29.652 [2024-12-07 10:17:58.228095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12255b0 (9): Bad file descriptor 00:43:29.652 [2024-12-07 10:17:58.229096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:29.652 [2024-12-07 10:17:58.229107] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:29.652 [2024-12-07 10:17:58.229114] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:29.652 [2024-12-07 10:17:58.229123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:43:29.652 request: 00:43:29.652 { 00:43:29.652 "name": "nvme0", 00:43:29.652 "trtype": "tcp", 00:43:29.652 "traddr": "127.0.0.1", 00:43:29.652 "adrfam": "ipv4", 00:43:29.652 "trsvcid": "4420", 00:43:29.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:29.652 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:29.652 "prchk_reftag": false, 00:43:29.652 "prchk_guard": false, 00:43:29.652 "hdgst": false, 00:43:29.652 "ddgst": false, 00:43:29.652 "psk": "key1", 00:43:29.652 "allow_unrecognized_csi": false, 00:43:29.652 "method": "bdev_nvme_attach_controller", 00:43:29.652 "req_id": 1 00:43:29.652 } 00:43:29.652 Got JSON-RPC error response 00:43:29.652 response: 00:43:29.652 { 00:43:29.652 "code": -5, 00:43:29.652 "message": "Input/output error" 00:43:29.652 } 00:43:29.652 10:17:58 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:43:29.652 10:17:58 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:29.652 10:17:58 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:29.652 10:17:58 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:29.652 10:17:58 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:43:29.652 10:17:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:29.652 10:17:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:29.652 10:17:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:29.652 10:17:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:29.652 10:17:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:29.911 10:17:58 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:43:29.911 10:17:58 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:43:29.911 10:17:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:29.911 10:17:58 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:43:29.911 10:17:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:29.911 10:17:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:29.911 10:17:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:30.168 10:17:58 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:43:30.168 10:17:58 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:43:30.168 10:17:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:30.168 10:17:58 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:43:30.168 10:17:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:43:30.426 10:17:59 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:43:30.426 10:17:59 keyring_file -- keyring/file.sh@78 -- # jq length 00:43:30.426 10:17:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:30.683 10:17:59 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:43:30.683 10:17:59 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.xApY3mgKXQ 00:43:30.683 10:17:59 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.xApY3mgKXQ 00:43:30.683 10:17:59 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:43:30.683 10:17:59 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.xApY3mgKXQ 00:43:30.683 10:17:59 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:43:30.683 10:17:59 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:30.683 10:17:59 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:43:30.683 10:17:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:30.683 10:17:59 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xApY3mgKXQ 00:43:30.683 10:17:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xApY3mgKXQ 00:43:30.683 [2024-12-07 10:17:59.386163] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.xApY3mgKXQ': 0100660 00:43:30.683 [2024-12-07 10:17:59.386188] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:43:30.683 request: 00:43:30.683 { 00:43:30.683 "name": "key0", 00:43:30.683 "path": "/tmp/tmp.xApY3mgKXQ", 00:43:30.683 "method": "keyring_file_add_key", 00:43:30.683 "req_id": 1 00:43:30.683 } 00:43:30.683 Got JSON-RPC error response 00:43:30.683 response: 00:43:30.683 { 00:43:30.683 "code": -1, 00:43:30.683 "message": "Operation not permitted" 00:43:30.683 } 00:43:30.683 10:17:59 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:43:30.683 10:17:59 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:30.683 10:17:59 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:30.683 10:17:59 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:30.683 10:17:59 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.xApY3mgKXQ 00:43:30.940 10:17:59 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xApY3mgKXQ 00:43:30.941 10:17:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xApY3mgKXQ 00:43:30.941 10:17:59 
keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.xApY3mgKXQ 00:43:30.941 10:17:59 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:43:30.941 10:17:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:30.941 10:17:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:30.941 10:17:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:30.941 10:17:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:30.941 10:17:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:31.198 10:17:59 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:43:31.198 10:17:59 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:31.198 10:17:59 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:43:31.198 10:17:59 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:31.198 10:17:59 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:43:31.198 10:17:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:31.198 10:17:59 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:43:31.198 10:17:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:31.198 10:17:59 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:31.198 10:17:59 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:31.455 [2024-12-07 10:17:59.967705] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.xApY3mgKXQ': No such file or directory 00:43:31.455 [2024-12-07 10:17:59.967728] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:43:31.455 [2024-12-07 10:17:59.967743] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:43:31.456 [2024-12-07 10:17:59.967755] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:43:31.456 [2024-12-07 10:17:59.967762] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:43:31.456 [2024-12-07 10:17:59.967768] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:43:31.456 request: 00:43:31.456 { 00:43:31.456 "name": "nvme0", 00:43:31.456 "trtype": "tcp", 00:43:31.456 "traddr": "127.0.0.1", 00:43:31.456 "adrfam": "ipv4", 00:43:31.456 "trsvcid": "4420", 00:43:31.456 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:31.456 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:31.456 "prchk_reftag": false, 00:43:31.456 "prchk_guard": false, 00:43:31.456 "hdgst": false, 00:43:31.456 "ddgst": false, 00:43:31.456 "psk": "key0", 00:43:31.456 "allow_unrecognized_csi": false, 00:43:31.456 "method": "bdev_nvme_attach_controller", 00:43:31.456 "req_id": 1 00:43:31.456 } 00:43:31.456 Got JSON-RPC error response 00:43:31.456 response: 00:43:31.456 { 00:43:31.456 "code": -19, 00:43:31.456 "message": "No such device" 00:43:31.456 } 00:43:31.456 10:17:59 keyring_file -- common/autotest_common.sh@653 
-- # es=1 00:43:31.456 10:17:59 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:31.456 10:17:59 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:31.456 10:17:59 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:31.456 10:17:59 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:43:31.456 10:17:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:31.713 10:18:00 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:31.713 10:18:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:31.713 10:18:00 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:31.713 10:18:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:31.713 10:18:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:31.713 10:18:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:31.713 10:18:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3ZjYyYI3lH 00:43:31.713 10:18:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:31.713 10:18:00 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:31.713 10:18:00 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:43:31.713 10:18:00 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:43:31.713 10:18:00 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:43:31.713 10:18:00 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:43:31.713 10:18:00 keyring_file -- nvmf/common.sh@729 -- # python - 00:43:31.713 10:18:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3ZjYyYI3lH 00:43:31.713 10:18:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3ZjYyYI3lH 
00:43:31.713 10:18:00 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.3ZjYyYI3lH 00:43:31.713 10:18:00 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3ZjYyYI3lH 00:43:31.713 10:18:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3ZjYyYI3lH 00:43:31.713 10:18:00 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:31.713 10:18:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:31.970 nvme0n1 00:43:31.970 10:18:00 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:43:32.228 10:18:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:32.228 10:18:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:32.228 10:18:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:32.228 10:18:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:32.228 10:18:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:32.228 10:18:00 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:43:32.228 10:18:00 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:43:32.228 10:18:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:32.485 10:18:01 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:43:32.485 10:18:01 keyring_file -- 
keyring/file.sh@102 -- # jq -r .removed 00:43:32.485 10:18:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:32.485 10:18:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:32.486 10:18:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:32.742 10:18:01 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:43:32.742 10:18:01 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:43:32.742 10:18:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:32.742 10:18:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:32.742 10:18:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:32.742 10:18:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:32.742 10:18:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:32.998 10:18:01 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:43:32.998 10:18:01 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:32.998 10:18:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:32.998 10:18:01 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:43:32.998 10:18:01 keyring_file -- keyring/file.sh@105 -- # jq length 00:43:32.998 10:18:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:33.255 10:18:01 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:43:33.255 10:18:01 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3ZjYyYI3lH 00:43:33.255 10:18:01 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3ZjYyYI3lH 00:43:33.512 10:18:02 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mTWtnYCVJD 00:43:33.512 10:18:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mTWtnYCVJD 00:43:33.769 10:18:02 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:33.769 10:18:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:34.026 nvme0n1 00:43:34.026 10:18:02 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:43:34.026 10:18:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:43:34.284 10:18:02 keyring_file -- keyring/file.sh@113 -- # config='{ 00:43:34.284 "subsystems": [ 00:43:34.284 { 00:43:34.284 "subsystem": "keyring", 00:43:34.284 "config": [ 00:43:34.284 { 00:43:34.284 "method": "keyring_file_add_key", 00:43:34.284 "params": { 00:43:34.284 "name": "key0", 00:43:34.284 "path": "/tmp/tmp.3ZjYyYI3lH" 00:43:34.284 } 00:43:34.284 }, 00:43:34.284 { 00:43:34.284 "method": "keyring_file_add_key", 00:43:34.284 "params": { 00:43:34.284 "name": "key1", 00:43:34.284 "path": "/tmp/tmp.mTWtnYCVJD" 00:43:34.284 } 00:43:34.284 } 00:43:34.284 ] 00:43:34.284 }, 00:43:34.284 { 00:43:34.284 "subsystem": "iobuf", 00:43:34.284 "config": [ 00:43:34.284 { 00:43:34.284 "method": "iobuf_set_options", 
00:43:34.284 "params": { 00:43:34.284 "small_pool_count": 8192, 00:43:34.284 "large_pool_count": 1024, 00:43:34.284 "small_bufsize": 8192, 00:43:34.284 "large_bufsize": 135168 00:43:34.284 } 00:43:34.284 } 00:43:34.284 ] 00:43:34.284 }, 00:43:34.284 { 00:43:34.284 "subsystem": "sock", 00:43:34.284 "config": [ 00:43:34.284 { 00:43:34.284 "method": "sock_set_default_impl", 00:43:34.284 "params": { 00:43:34.284 "impl_name": "posix" 00:43:34.284 } 00:43:34.284 }, 00:43:34.284 { 00:43:34.284 "method": "sock_impl_set_options", 00:43:34.284 "params": { 00:43:34.284 "impl_name": "ssl", 00:43:34.284 "recv_buf_size": 4096, 00:43:34.284 "send_buf_size": 4096, 00:43:34.284 "enable_recv_pipe": true, 00:43:34.284 "enable_quickack": false, 00:43:34.284 "enable_placement_id": 0, 00:43:34.284 "enable_zerocopy_send_server": true, 00:43:34.284 "enable_zerocopy_send_client": false, 00:43:34.284 "zerocopy_threshold": 0, 00:43:34.284 "tls_version": 0, 00:43:34.284 "enable_ktls": false 00:43:34.284 } 00:43:34.284 }, 00:43:34.284 { 00:43:34.284 "method": "sock_impl_set_options", 00:43:34.284 "params": { 00:43:34.284 "impl_name": "posix", 00:43:34.284 "recv_buf_size": 2097152, 00:43:34.284 "send_buf_size": 2097152, 00:43:34.284 "enable_recv_pipe": true, 00:43:34.284 "enable_quickack": false, 00:43:34.284 "enable_placement_id": 0, 00:43:34.284 "enable_zerocopy_send_server": true, 00:43:34.284 "enable_zerocopy_send_client": false, 00:43:34.284 "zerocopy_threshold": 0, 00:43:34.284 "tls_version": 0, 00:43:34.284 "enable_ktls": false 00:43:34.284 } 00:43:34.284 } 00:43:34.284 ] 00:43:34.284 }, 00:43:34.284 { 00:43:34.284 "subsystem": "vmd", 00:43:34.284 "config": [] 00:43:34.284 }, 00:43:34.284 { 00:43:34.284 "subsystem": "accel", 00:43:34.284 "config": [ 00:43:34.284 { 00:43:34.284 "method": "accel_set_options", 00:43:34.284 "params": { 00:43:34.284 "small_cache_size": 128, 00:43:34.284 "large_cache_size": 16, 00:43:34.284 "task_count": 2048, 00:43:34.284 "sequence_count": 2048, 00:43:34.284 
"buf_count": 2048 00:43:34.284 } 00:43:34.284 } 00:43:34.284 ] 00:43:34.284 }, 00:43:34.284 { 00:43:34.284 "subsystem": "bdev", 00:43:34.284 "config": [ 00:43:34.284 { 00:43:34.284 "method": "bdev_set_options", 00:43:34.284 "params": { 00:43:34.284 "bdev_io_pool_size": 65535, 00:43:34.284 "bdev_io_cache_size": 256, 00:43:34.284 "bdev_auto_examine": true, 00:43:34.284 "iobuf_small_cache_size": 128, 00:43:34.284 "iobuf_large_cache_size": 16 00:43:34.284 } 00:43:34.284 }, 00:43:34.284 { 00:43:34.284 "method": "bdev_raid_set_options", 00:43:34.284 "params": { 00:43:34.284 "process_window_size_kb": 1024, 00:43:34.284 "process_max_bandwidth_mb_sec": 0 00:43:34.284 } 00:43:34.284 }, 00:43:34.284 { 00:43:34.284 "method": "bdev_iscsi_set_options", 00:43:34.284 "params": { 00:43:34.284 "timeout_sec": 30 00:43:34.284 } 00:43:34.284 }, 00:43:34.284 { 00:43:34.284 "method": "bdev_nvme_set_options", 00:43:34.284 "params": { 00:43:34.284 "action_on_timeout": "none", 00:43:34.284 "timeout_us": 0, 00:43:34.284 "timeout_admin_us": 0, 00:43:34.284 "keep_alive_timeout_ms": 10000, 00:43:34.284 "arbitration_burst": 0, 00:43:34.284 "low_priority_weight": 0, 00:43:34.284 "medium_priority_weight": 0, 00:43:34.284 "high_priority_weight": 0, 00:43:34.284 "nvme_adminq_poll_period_us": 10000, 00:43:34.284 "nvme_ioq_poll_period_us": 0, 00:43:34.284 "io_queue_requests": 512, 00:43:34.284 "delay_cmd_submit": true, 00:43:34.284 "transport_retry_count": 4, 00:43:34.284 "bdev_retry_count": 3, 00:43:34.284 "transport_ack_timeout": 0, 00:43:34.284 "ctrlr_loss_timeout_sec": 0, 00:43:34.284 "reconnect_delay_sec": 0, 00:43:34.284 "fast_io_fail_timeout_sec": 0, 00:43:34.284 "disable_auto_failback": false, 00:43:34.284 "generate_uuids": false, 00:43:34.284 "transport_tos": 0, 00:43:34.284 "nvme_error_stat": false, 00:43:34.284 "rdma_srq_size": 0, 00:43:34.284 "io_path_stat": false, 00:43:34.284 "allow_accel_sequence": false, 00:43:34.284 "rdma_max_cq_size": 0, 00:43:34.284 "rdma_cm_event_timeout_ms": 0, 
00:43:34.284 "dhchap_digests": [ 00:43:34.284 "sha256", 00:43:34.284 "sha384", 00:43:34.284 "sha512" 00:43:34.284 ], 00:43:34.284 "dhchap_dhgroups": [ 00:43:34.285 "null", 00:43:34.285 "ffdhe2048", 00:43:34.285 "ffdhe3072", 00:43:34.285 "ffdhe4096", 00:43:34.285 "ffdhe6144", 00:43:34.285 "ffdhe8192" 00:43:34.285 ] 00:43:34.285 } 00:43:34.285 }, 00:43:34.285 { 00:43:34.285 "method": "bdev_nvme_attach_controller", 00:43:34.285 "params": { 00:43:34.285 "name": "nvme0", 00:43:34.285 "trtype": "TCP", 00:43:34.285 "adrfam": "IPv4", 00:43:34.285 "traddr": "127.0.0.1", 00:43:34.285 "trsvcid": "4420", 00:43:34.285 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:34.285 "prchk_reftag": false, 00:43:34.285 "prchk_guard": false, 00:43:34.285 "ctrlr_loss_timeout_sec": 0, 00:43:34.285 "reconnect_delay_sec": 0, 00:43:34.285 "fast_io_fail_timeout_sec": 0, 00:43:34.285 "psk": "key0", 00:43:34.285 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:34.285 "hdgst": false, 00:43:34.285 "ddgst": false 00:43:34.285 } 00:43:34.285 }, 00:43:34.285 { 00:43:34.285 "method": "bdev_nvme_set_hotplug", 00:43:34.285 "params": { 00:43:34.285 "period_us": 100000, 00:43:34.285 "enable": false 00:43:34.285 } 00:43:34.285 }, 00:43:34.285 { 00:43:34.285 "method": "bdev_wait_for_examine" 00:43:34.285 } 00:43:34.285 ] 00:43:34.285 }, 00:43:34.285 { 00:43:34.285 "subsystem": "nbd", 00:43:34.285 "config": [] 00:43:34.285 } 00:43:34.285 ] 00:43:34.285 }' 00:43:34.285 10:18:02 keyring_file -- keyring/file.sh@115 -- # killprocess 1589059 00:43:34.285 10:18:02 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1589059 ']' 00:43:34.285 10:18:02 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1589059 00:43:34.285 10:18:02 keyring_file -- common/autotest_common.sh@955 -- # uname 00:43:34.285 10:18:02 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:34.285 10:18:02 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1589059 00:43:34.285 10:18:02 
keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:34.285 10:18:02 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:34.285 10:18:02 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1589059' 00:43:34.285 killing process with pid 1589059 00:43:34.285 10:18:02 keyring_file -- common/autotest_common.sh@969 -- # kill 1589059 00:43:34.285 Received shutdown signal, test time was about 1.000000 seconds 00:43:34.285 00:43:34.285 Latency(us) 00:43:34.285 [2024-12-07T09:18:03.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:34.285 [2024-12-07T09:18:03.011Z] =================================================================================================================== 00:43:34.285 [2024-12-07T09:18:03.011Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:34.285 10:18:02 keyring_file -- common/autotest_common.sh@974 -- # wait 1589059 00:43:34.285 10:18:02 keyring_file -- keyring/file.sh@118 -- # bperfpid=1590693 00:43:34.285 10:18:03 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1590693 /var/tmp/bperf.sock 00:43:34.285 10:18:03 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1590693 ']' 00:43:34.285 10:18:03 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:34.285 10:18:03 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:43:34.285 10:18:03 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:34.285 10:18:03 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:34.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:43:34.285 10:18:03 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:43:34.285 "subsystems": [ 00:43:34.285 { 00:43:34.285 "subsystem": "keyring", 00:43:34.285 "config": [ 00:43:34.285 { 00:43:34.285 "method": "keyring_file_add_key", 00:43:34.285 "params": { 00:43:34.285 "name": "key0", 00:43:34.285 "path": "/tmp/tmp.3ZjYyYI3lH" 00:43:34.285 } 00:43:34.285 }, 00:43:34.285 { 00:43:34.285 "method": "keyring_file_add_key", 00:43:34.285 "params": { 00:43:34.285 "name": "key1", 00:43:34.285 "path": "/tmp/tmp.mTWtnYCVJD" 00:43:34.285 } 00:43:34.285 } 00:43:34.285 ] 00:43:34.285 }, 00:43:34.285 { 00:43:34.285 "subsystem": "iobuf", 00:43:34.285 "config": [ 00:43:34.285 { 00:43:34.285 "method": "iobuf_set_options", 00:43:34.285 "params": { 00:43:34.285 "small_pool_count": 8192, 00:43:34.285 "large_pool_count": 1024, 00:43:34.285 "small_bufsize": 8192, 00:43:34.285 "large_bufsize": 135168 00:43:34.285 } 00:43:34.285 } 00:43:34.285 ] 00:43:34.285 }, 00:43:34.285 { 00:43:34.285 "subsystem": "sock", 00:43:34.285 "config": [ 00:43:34.285 { 00:43:34.285 "method": "sock_set_default_impl", 00:43:34.285 "params": { 00:43:34.285 "impl_name": "posix" 00:43:34.285 } 00:43:34.285 }, 00:43:34.285 { 00:43:34.285 "method": "sock_impl_set_options", 00:43:34.285 "params": { 00:43:34.285 "impl_name": "ssl", 00:43:34.285 "recv_buf_size": 4096, 00:43:34.285 "send_buf_size": 4096, 00:43:34.285 "enable_recv_pipe": true, 00:43:34.285 "enable_quickack": false, 00:43:34.285 "enable_placement_id": 0, 00:43:34.285 "enable_zerocopy_send_server": true, 00:43:34.285 "enable_zerocopy_send_client": false, 00:43:34.285 "zerocopy_threshold": 0, 00:43:34.285 "tls_version": 0, 00:43:34.285 "enable_ktls": false 00:43:34.285 } 00:43:34.285 }, 00:43:34.285 { 00:43:34.285 "method": "sock_impl_set_options", 00:43:34.285 "params": { 00:43:34.285 "impl_name": "posix", 00:43:34.285 "recv_buf_size": 2097152, 00:43:34.285 "send_buf_size": 2097152, 00:43:34.285 "enable_recv_pipe": true, 00:43:34.285 
"enable_quickack": false, 00:43:34.285 "enable_placement_id": 0, 00:43:34.285 "enable_zerocopy_send_server": true, 00:43:34.285 "enable_zerocopy_send_client": false, 00:43:34.285 "zerocopy_threshold": 0, 00:43:34.285 "tls_version": 0, 00:43:34.285 "enable_ktls": false 00:43:34.285 } 00:43:34.285 } 00:43:34.285 ] 00:43:34.285 }, 00:43:34.285 { 00:43:34.285 "subsystem": "vmd", 00:43:34.285 "config": [] 00:43:34.285 }, 00:43:34.285 { 00:43:34.285 "subsystem": "accel", 00:43:34.285 "config": [ 00:43:34.285 { 00:43:34.285 "method": "accel_set_options", 00:43:34.285 "params": { 00:43:34.285 "small_cache_size": 128, 00:43:34.285 "large_cache_size": 16, 00:43:34.285 "task_count": 2048, 00:43:34.285 "sequence_count": 2048, 00:43:34.285 "buf_count": 2048 00:43:34.285 } 00:43:34.285 } 00:43:34.285 ] 00:43:34.285 }, 00:43:34.285 { 00:43:34.285 "subsystem": "bdev", 00:43:34.285 "config": [ 00:43:34.285 { 00:43:34.285 "method": "bdev_set_options", 00:43:34.285 "params": { 00:43:34.285 "bdev_io_pool_size": 65535, 00:43:34.285 "bdev_io_cache_size": 256, 00:43:34.285 "bdev_auto_examine": true, 00:43:34.285 "iobuf_small_cache_size": 128, 00:43:34.285 "iobuf_large_cache_size": 16 00:43:34.285 } 00:43:34.285 }, 00:43:34.285 { 00:43:34.285 "method": "bdev_raid_set_options", 00:43:34.285 "params": { 00:43:34.285 "process_window_size_kb": 1024, 00:43:34.285 "process_max_bandwidth_mb_sec": 0 00:43:34.285 } 00:43:34.285 }, 00:43:34.285 { 00:43:34.285 "method": "bdev_iscsi_set_options", 00:43:34.285 "params": { 00:43:34.285 "timeout_sec": 30 00:43:34.285 } 00:43:34.285 }, 00:43:34.285 { 00:43:34.285 "method": "bdev_nvme_set_options", 00:43:34.285 "params": { 00:43:34.285 "action_on_timeout": "none", 00:43:34.285 "timeout_us": 0, 00:43:34.285 "timeout_admin_us": 0, 00:43:34.285 "keep_alive_timeout_ms": 10000, 00:43:34.285 "arbitration_burst": 0, 00:43:34.285 "low_priority_weight": 0, 00:43:34.285 "medium_priority_weight": 0, 00:43:34.285 "high_priority_weight": 0, 00:43:34.285 
"nvme_adminq_poll_period_us": 10000, 00:43:34.285 "nvme_ioq_poll_period_us": 0, 00:43:34.285 "io_queue_requests": 512, 00:43:34.285 "delay_cmd_submit": true, 00:43:34.285 "transport_retry_count": 4, 00:43:34.285 "bdev_retry_count": 3, 00:43:34.285 "transport_ack_timeout": 0, 00:43:34.285 "ctrlr_loss_timeout_sec": 0, 00:43:34.285 "reconnect_delay_sec": 0, 00:43:34.285 "fast_io_fail_timeout_sec": 0, 00:43:34.285 "disable_auto_failback": false, 00:43:34.286 "generate_uuids": false, 00:43:34.286 "transport_tos": 0, 00:43:34.286 "nvme_error_stat": false, 00:43:34.286 "rdma_srq_size": 0, 00:43:34.286 "io_path_stat": false, 00:43:34.286 "allow_accel_sequence": false, 00:43:34.286 "rdma_max_cq_size": 0, 00:43:34.286 "rdma_cm_event_timeout_ms": 0, 00:43:34.286 "dhchap_digests": [ 00:43:34.286 "sha256", 00:43:34.286 "sha384", 00:43:34.286 "sha512" 00:43:34.286 ], 00:43:34.286 "dhchap_dhgroups": [ 00:43:34.286 "null", 00:43:34.286 "ffdhe2048", 00:43:34.286 "ffdhe3072", 00:43:34.286 "ffdhe4096", 00:43:34.286 "ffdhe6144", 00:43:34.286 "ffdhe8192" 00:43:34.286 ] 00:43:34.286 } 00:43:34.286 }, 00:43:34.286 { 00:43:34.286 "method": "bdev_nvme_attach_controller", 00:43:34.286 "params": { 00:43:34.286 "name": "nvme0", 00:43:34.286 "trtype": "TCP", 00:43:34.286 "adrfam": "IPv4", 00:43:34.286 "traddr": "127.0.0.1", 00:43:34.286 "trsvcid": "4420", 00:43:34.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:34.286 "prchk_reftag": false, 00:43:34.286 "prchk_guard": false, 00:43:34.286 "ctrlr_loss_timeout_sec": 0, 00:43:34.286 "reconnect_delay_sec": 0, 00:43:34.286 "fast_io_fail_timeout_sec": 0, 00:43:34.286 "psk": "key0", 00:43:34.286 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:34.286 "hdgst": false, 00:43:34.286 "ddgst": false 00:43:34.286 } 00:43:34.286 }, 00:43:34.286 { 00:43:34.286 "method": "bdev_nvme_set_hotplug", 00:43:34.286 "params": { 00:43:34.286 "period_us": 100000, 00:43:34.286 "enable": false 00:43:34.286 } 00:43:34.286 }, 00:43:34.286 { 00:43:34.286 "method": 
"bdev_wait_for_examine" 00:43:34.286 } 00:43:34.286 ] 00:43:34.286 }, 00:43:34.286 { 00:43:34.286 "subsystem": "nbd", 00:43:34.286 "config": [] 00:43:34.286 } 00:43:34.286 ] 00:43:34.286 }' 00:43:34.286 10:18:03 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:34.286 10:18:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:34.543 [2024-12-07 10:18:03.044255] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:43:34.543 [2024-12-07 10:18:03.044303] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590693 ] 00:43:34.543 [2024-12-07 10:18:03.098599] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:34.543 [2024-12-07 10:18:03.140449] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:34.800 [2024-12-07 10:18:03.296719] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:35.364 10:18:03 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:35.364 10:18:03 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:43:35.364 10:18:03 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:43:35.364 10:18:03 keyring_file -- keyring/file.sh@121 -- # jq length 00:43:35.364 10:18:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:35.623 10:18:04 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:43:35.623 10:18:04 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:43:35.623 10:18:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:35.623 10:18:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:35.623 10:18:04 keyring_file -- keyring/common.sh@10 
-- # bperf_cmd keyring_get_keys 00:43:35.623 10:18:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:35.623 10:18:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:35.623 10:18:04 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:43:35.623 10:18:04 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:43:35.623 10:18:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:35.623 10:18:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:35.623 10:18:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:35.623 10:18:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:35.623 10:18:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:35.880 10:18:04 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:43:35.880 10:18:04 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:43:35.880 10:18:04 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:43:35.880 10:18:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:43:36.137 10:18:04 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:43:36.137 10:18:04 keyring_file -- keyring/file.sh@1 -- # cleanup 00:43:36.137 10:18:04 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.3ZjYyYI3lH /tmp/tmp.mTWtnYCVJD 00:43:36.137 10:18:04 keyring_file -- keyring/file.sh@20 -- # killprocess 1590693 00:43:36.137 10:18:04 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1590693 ']' 00:43:36.137 10:18:04 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1590693 00:43:36.137 10:18:04 keyring_file -- common/autotest_common.sh@955 -- # 
uname 00:43:36.137 10:18:04 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:36.137 10:18:04 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1590693 00:43:36.138 10:18:04 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:36.138 10:18:04 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:36.138 10:18:04 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1590693' 00:43:36.138 killing process with pid 1590693 00:43:36.138 10:18:04 keyring_file -- common/autotest_common.sh@969 -- # kill 1590693 00:43:36.138 Received shutdown signal, test time was about 1.000000 seconds 00:43:36.138 00:43:36.138 Latency(us) 00:43:36.138 [2024-12-07T09:18:04.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:36.138 [2024-12-07T09:18:04.864Z] =================================================================================================================== 00:43:36.138 [2024-12-07T09:18:04.864Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:36.138 10:18:04 keyring_file -- common/autotest_common.sh@974 -- # wait 1590693 00:43:36.395 10:18:04 keyring_file -- keyring/file.sh@21 -- # killprocess 1588921 00:43:36.395 10:18:04 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1588921 ']' 00:43:36.395 10:18:04 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1588921 00:43:36.395 10:18:04 keyring_file -- common/autotest_common.sh@955 -- # uname 00:43:36.395 10:18:04 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:36.395 10:18:04 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1588921 00:43:36.395 10:18:04 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:36.395 10:18:04 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:36.395 10:18:04 keyring_file -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1588921' 00:43:36.395 killing process with pid 1588921 00:43:36.395 10:18:04 keyring_file -- common/autotest_common.sh@969 -- # kill 1588921 00:43:36.395 10:18:04 keyring_file -- common/autotest_common.sh@974 -- # wait 1588921 00:43:36.652 00:43:36.652 real 0m11.748s 00:43:36.652 user 0m28.989s 00:43:36.652 sys 0m2.754s 00:43:36.652 10:18:05 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:36.652 10:18:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:36.652 ************************************ 00:43:36.652 END TEST keyring_file 00:43:36.652 ************************************ 00:43:36.652 10:18:05 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:43:36.652 10:18:05 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:36.652 10:18:05 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:43:36.652 10:18:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:36.652 10:18:05 -- common/autotest_common.sh@10 -- # set +x 00:43:36.909 ************************************ 00:43:36.909 START TEST keyring_linux 00:43:36.909 ************************************ 00:43:36.909 10:18:05 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:36.909 Joined session keyring: 509882617 00:43:36.909 * Looking for test storage... 
00:43:36.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:36.909 10:18:05 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:36.909 10:18:05 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:43:36.909 10:18:05 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:36.909 10:18:05 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@345 -- # : 1 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@368 -- # return 0 00:43:36.909 10:18:05 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:36.909 10:18:05 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:36.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:36.909 --rc genhtml_branch_coverage=1 00:43:36.909 --rc genhtml_function_coverage=1 00:43:36.909 --rc genhtml_legend=1 00:43:36.909 --rc geninfo_all_blocks=1 00:43:36.909 --rc geninfo_unexecuted_blocks=1 00:43:36.909 00:43:36.909 ' 00:43:36.909 10:18:05 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:36.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:36.909 --rc genhtml_branch_coverage=1 00:43:36.909 --rc genhtml_function_coverage=1 00:43:36.909 --rc genhtml_legend=1 00:43:36.909 --rc geninfo_all_blocks=1 00:43:36.909 --rc geninfo_unexecuted_blocks=1 00:43:36.909 00:43:36.909 ' 
00:43:36.909 10:18:05 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:36.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:36.909 --rc genhtml_branch_coverage=1 00:43:36.909 --rc genhtml_function_coverage=1 00:43:36.909 --rc genhtml_legend=1 00:43:36.909 --rc geninfo_all_blocks=1 00:43:36.909 --rc geninfo_unexecuted_blocks=1 00:43:36.909 00:43:36.909 ' 00:43:36.909 10:18:05 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:36.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:36.909 --rc genhtml_branch_coverage=1 00:43:36.909 --rc genhtml_function_coverage=1 00:43:36.909 --rc genhtml_legend=1 00:43:36.909 --rc geninfo_all_blocks=1 00:43:36.909 --rc geninfo_unexecuted_blocks=1 00:43:36.909 00:43:36.909 ' 00:43:36.909 10:18:05 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:36.909 10:18:05 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:36.909 10:18:05 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:36.909 10:18:05 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:36.909 10:18:05 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:36.909 10:18:05 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:36.909 10:18:05 keyring_linux -- paths/export.sh@5 -- # export PATH 00:43:36.909 10:18:05 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:43:36.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:36.909 10:18:05 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:36.909 10:18:05 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:36.909 10:18:05 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:36.909 10:18:05 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:36.909 10:18:05 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:43:36.909 10:18:05 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:43:36.910 10:18:05 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:43:36.910 10:18:05 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:43:36.910 10:18:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:36.910 10:18:05 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:43:36.910 10:18:05 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:36.910 10:18:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:36.910 10:18:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:43:36.910 10:18:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:36.910 10:18:05 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:36.910 10:18:05 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:43:36.910 10:18:05 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:43:36.910 10:18:05 keyring_linux -- nvmf/common.sh@728 -- # 
key=00112233445566778899aabbccddeeff 00:43:36.910 10:18:05 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:43:36.910 10:18:05 keyring_linux -- nvmf/common.sh@729 -- # python - 00:43:37.167 10:18:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:43:37.167 10:18:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:43:37.167 /tmp/:spdk-test:key0 00:43:37.167 10:18:05 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:43:37.167 10:18:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:37.167 10:18:05 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:43:37.167 10:18:05 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:37.167 10:18:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:37.167 10:18:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:43:37.167 10:18:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:37.167 10:18:05 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:37.167 10:18:05 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:43:37.167 10:18:05 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:43:37.167 10:18:05 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:43:37.167 10:18:05 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:43:37.167 10:18:05 keyring_linux -- nvmf/common.sh@729 -- # python - 00:43:37.167 10:18:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:43:37.167 10:18:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:43:37.167 /tmp/:spdk-test:key1 00:43:37.167 10:18:05 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1591099 00:43:37.167 10:18:05 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 1591099 00:43:37.167 10:18:05 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:37.167 10:18:05 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1591099 ']' 00:43:37.167 10:18:05 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:37.167 10:18:05 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:37.167 10:18:05 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:37.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:37.167 10:18:05 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:37.167 10:18:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:37.167 [2024-12-07 10:18:05.728330] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:43:37.167 [2024-12-07 10:18:05.728380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591099 ] 00:43:37.167 [2024-12-07 10:18:05.781380] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:37.167 [2024-12-07 10:18:05.822771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:37.424 10:18:06 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:37.424 10:18:06 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:43:37.424 10:18:06 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:43:37.424 10:18:06 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:37.424 10:18:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:37.424 [2024-12-07 10:18:06.026307] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:37.424 null0 00:43:37.424 [2024-12-07 10:18:06.058369] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:37.424 [2024-12-07 10:18:06.058725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:37.424 10:18:06 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:37.424 10:18:06 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:43:37.424 744396468 00:43:37.424 10:18:06 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:43:37.424 662192138 00:43:37.424 10:18:06 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1591277 00:43:37.424 10:18:06 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1591277 /var/tmp/bperf.sock 00:43:37.424 10:18:06 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:43:37.424 10:18:06 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1591277 ']' 00:43:37.424 10:18:06 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:37.424 10:18:06 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:37.424 10:18:06 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:37.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:37.424 10:18:06 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:37.424 10:18:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:37.424 [2024-12-07 10:18:06.129022] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:43:37.424 [2024-12-07 10:18:06.129065] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591277 ] 00:43:37.681 [2024-12-07 10:18:06.183208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:37.681 [2024-12-07 10:18:06.224397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:37.681 10:18:06 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:37.681 10:18:06 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:43:37.681 10:18:06 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:43:37.681 10:18:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:43:37.938 10:18:06 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:43:37.938 10:18:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:38.195 10:18:06 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:38.195 10:18:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:38.195 [2024-12-07 10:18:06.895233] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:38.452 nvme0n1 00:43:38.452 10:18:06 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:43:38.452 10:18:06 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:43:38.452 10:18:06 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:38.452 10:18:06 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:38.452 10:18:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:38.452 10:18:06 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:38.710 10:18:07 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:43:38.710 10:18:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:38.710 10:18:07 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:43:38.710 10:18:07 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:43:38.710 10:18:07 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:38.710 10:18:07 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:43:38.710 10:18:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:38.710 10:18:07 keyring_linux -- keyring/linux.sh@25 -- # sn=744396468 00:43:38.710 10:18:07 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:43:38.710 10:18:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:38.710 10:18:07 keyring_linux -- keyring/linux.sh@26 -- # [[ 744396468 == \7\4\4\3\9\6\4\6\8 ]] 00:43:38.710 10:18:07 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 744396468 00:43:38.710 10:18:07 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:43:38.710 10:18:07 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:38.966 Running I/O for 1 seconds... 00:43:39.895 17775.00 IOPS, 69.43 MiB/s 00:43:39.895 Latency(us) 00:43:39.895 [2024-12-07T09:18:08.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:39.895 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:39.895 nvme0n1 : 1.01 17769.78 69.41 0.00 0.00 7174.29 6012.22 13905.03 00:43:39.895 [2024-12-07T09:18:08.621Z] =================================================================================================================== 00:43:39.895 [2024-12-07T09:18:08.621Z] Total : 17769.78 69.41 0.00 0.00 7174.29 6012.22 13905.03 00:43:39.895 { 00:43:39.895 "results": [ 00:43:39.895 { 00:43:39.895 "job": "nvme0n1", 00:43:39.895 "core_mask": "0x2", 00:43:39.895 "workload": "randread", 00:43:39.895 "status": "finished", 00:43:39.895 "queue_depth": 128, 00:43:39.895 "io_size": 4096, 00:43:39.895 "runtime": 1.007497, 00:43:39.895 "iops": 17769.77995964256, 00:43:39.895 "mibps": 69.41320296735375, 00:43:39.895 "io_failed": 0, 00:43:39.895 "io_timeout": 0, 00:43:39.895 "avg_latency_us": 7174.286808768995, 00:43:39.895 "min_latency_us": 6012.215652173913, 00:43:39.895 "max_latency_us": 13905.029565217392 00:43:39.895 } 00:43:39.895 ], 00:43:39.895 "core_count": 1 00:43:39.895 } 00:43:39.895 10:18:08 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:39.895 10:18:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:40.152 10:18:08 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:43:40.152 10:18:08 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:43:40.152 10:18:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:40.152 10:18:08 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:40.152 10:18:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:40.152 10:18:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:40.409 10:18:08 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:43:40.409 10:18:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:40.409 10:18:08 keyring_linux -- keyring/linux.sh@23 -- # return 00:43:40.409 10:18:08 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:40.409 10:18:08 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:43:40.409 10:18:08 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:40.409 10:18:08 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:43:40.409 10:18:08 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:40.409 10:18:08 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:43:40.409 10:18:08 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:40.409 10:18:08 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:40.409 10:18:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:40.409 [2024-12-07 10:18:09.097587] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:40.409 [2024-12-07 10:18:09.098239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc5270 (107): Transport endpoint is not connected 00:43:40.409 [2024-12-07 10:18:09.099234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc5270 (9): Bad file descriptor 00:43:40.409 [2024-12-07 10:18:09.100234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:40.409 [2024-12-07 10:18:09.100245] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:40.409 [2024-12-07 10:18:09.100254] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:40.409 [2024-12-07 10:18:09.100263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:43:40.409 request: 00:43:40.409 { 00:43:40.409 "name": "nvme0", 00:43:40.409 "trtype": "tcp", 00:43:40.409 "traddr": "127.0.0.1", 00:43:40.409 "adrfam": "ipv4", 00:43:40.409 "trsvcid": "4420", 00:43:40.409 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:40.409 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:40.409 "prchk_reftag": false, 00:43:40.409 "prchk_guard": false, 00:43:40.409 "hdgst": false, 00:43:40.409 "ddgst": false, 00:43:40.409 "psk": ":spdk-test:key1", 00:43:40.409 "allow_unrecognized_csi": false, 00:43:40.409 "method": "bdev_nvme_attach_controller", 00:43:40.409 "req_id": 1 00:43:40.409 } 00:43:40.409 Got JSON-RPC error response 00:43:40.409 response: 00:43:40.409 { 00:43:40.409 "code": -5, 00:43:40.409 "message": "Input/output error" 00:43:40.409 } 00:43:40.409 10:18:09 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:43:40.409 10:18:09 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:40.409 10:18:09 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:40.409 10:18:09 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:40.409 10:18:09 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:43:40.409 10:18:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:40.409 10:18:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:43:40.409 10:18:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:43:40.409 10:18:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:43:40.409 10:18:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:40.409 10:18:09 keyring_linux -- keyring/linux.sh@33 -- # sn=744396468 00:43:40.409 10:18:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 744396468 00:43:40.409 1 links removed 00:43:40.409 10:18:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:40.409 10:18:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:43:40.409 
10:18:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:43:40.409 10:18:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:43:40.409 10:18:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:43:40.409 10:18:09 keyring_linux -- keyring/linux.sh@33 -- # sn=662192138 00:43:40.409 10:18:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 662192138 00:43:40.666 1 links removed 00:43:40.666 10:18:09 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1591277 00:43:40.666 10:18:09 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1591277 ']' 00:43:40.666 10:18:09 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1591277 00:43:40.666 10:18:09 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:43:40.666 10:18:09 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:40.666 10:18:09 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1591277 00:43:40.666 10:18:09 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:40.666 10:18:09 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:40.666 10:18:09 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1591277' 00:43:40.666 killing process with pid 1591277 00:43:40.666 10:18:09 keyring_linux -- common/autotest_common.sh@969 -- # kill 1591277 00:43:40.666 Received shutdown signal, test time was about 1.000000 seconds 00:43:40.666 00:43:40.666 Latency(us) 00:43:40.666 [2024-12-07T09:18:09.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:40.666 [2024-12-07T09:18:09.392Z] =================================================================================================================== 00:43:40.666 [2024-12-07T09:18:09.392Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:40.666 10:18:09 keyring_linux -- common/autotest_common.sh@974 -- # wait 1591277 
00:43:40.666 10:18:09 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1591099 00:43:40.666 10:18:09 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1591099 ']' 00:43:40.666 10:18:09 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1591099 00:43:40.666 10:18:09 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:43:40.666 10:18:09 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:40.666 10:18:09 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1591099 00:43:40.923 10:18:09 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:40.923 10:18:09 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:40.923 10:18:09 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1591099' 00:43:40.923 killing process with pid 1591099 00:43:40.923 10:18:09 keyring_linux -- common/autotest_common.sh@969 -- # kill 1591099 00:43:40.923 10:18:09 keyring_linux -- common/autotest_common.sh@974 -- # wait 1591099 00:43:41.180 00:43:41.180 real 0m4.347s 00:43:41.180 user 0m7.853s 00:43:41.180 sys 0m1.546s 00:43:41.180 10:18:09 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:41.180 10:18:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:41.180 ************************************ 00:43:41.180 END TEST keyring_linux 00:43:41.180 ************************************ 00:43:41.180 10:18:09 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:43:41.180 10:18:09 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:43:41.180 10:18:09 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:43:41.180 10:18:09 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:43:41.180 10:18:09 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:43:41.180 10:18:09 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:43:41.180 10:18:09 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:43:41.180 10:18:09 -- spdk/autotest.sh@342 -- # 
'[' 0 -eq 1 ']' 00:43:41.180 10:18:09 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:43:41.180 10:18:09 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:43:41.180 10:18:09 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:43:41.180 10:18:09 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:43:41.180 10:18:09 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:43:41.180 10:18:09 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:43:41.180 10:18:09 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:43:41.180 10:18:09 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:43:41.180 10:18:09 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:43:41.180 10:18:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:41.180 10:18:09 -- common/autotest_common.sh@10 -- # set +x 00:43:41.180 10:18:09 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:43:41.180 10:18:09 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:43:41.180 10:18:09 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:43:41.180 10:18:09 -- common/autotest_common.sh@10 -- # set +x 00:43:46.440 INFO: APP EXITING 00:43:46.440 INFO: killing all VMs 00:43:46.440 INFO: killing vhost app 00:43:46.440 INFO: EXIT DONE 00:43:47.812 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:43:47.812 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:43:47.812 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:43:48.069 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:43:48.069 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:43:48.069 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:43:48.069 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:43:48.069 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:43:48.069 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:43:48.069 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:43:48.069 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:43:48.069 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:43:48.069 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:43:48.069 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:43:48.069 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:43:48.325 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:43:48.325 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:43:50.851 Cleaning 00:43:50.851 Removing: /var/run/dpdk/spdk0/config 00:43:50.851 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:50.851 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:50.851 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:50.851 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:50.851 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:50.851 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:50.851 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:50.851 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:50.851 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:50.851 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:50.851 Removing: /var/run/dpdk/spdk1/config 00:43:50.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:50.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:50.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:50.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:50.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:50.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:50.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:43:50.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:43:50.851 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:43:50.851 Removing: /var/run/dpdk/spdk1/hugepage_info 00:43:50.851 Removing: /var/run/dpdk/spdk2/config 00:43:50.851 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:43:50.851 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:43:50.851 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:43:50.851 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:43:50.851 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:43:50.851 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:43:50.851 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:43:50.851 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:43:50.851 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:43:50.851 Removing: /var/run/dpdk/spdk2/hugepage_info 00:43:50.851 Removing: /var/run/dpdk/spdk3/config 00:43:50.851 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:43:50.851 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:43:50.851 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:43:50.851 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:43:50.851 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:43:50.851 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:43:50.851 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:43:50.851 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:43:50.851 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:43:50.851 Removing: /var/run/dpdk/spdk3/hugepage_info 00:43:50.851 Removing: /var/run/dpdk/spdk4/config 00:43:50.851 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:43:50.851 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:43:50.851 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:43:50.851 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:43:50.851 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:43:50.851 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:43:50.851 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:43:50.851 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:43:50.851 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:43:50.851 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:43:50.851 Removing: /dev/shm/bdev_svc_trace.1 00:43:50.851 Removing: /dev/shm/nvmf_trace.0 00:43:50.851 Removing: /dev/shm/spdk_tgt_trace.pid1038462 00:43:50.851 Removing: /var/run/dpdk/spdk0 00:43:50.851 Removing: /var/run/dpdk/spdk1 00:43:50.851 Removing: /var/run/dpdk/spdk2 00:43:50.851 Removing: /var/run/dpdk/spdk3 00:43:50.851 Removing: /var/run/dpdk/spdk4 00:43:50.851 Removing: /var/run/dpdk/spdk_pid1036320 00:43:50.851 Removing: /var/run/dpdk/spdk_pid1037382 00:43:50.851 Removing: /var/run/dpdk/spdk_pid1038462 00:43:50.851 Removing: /var/run/dpdk/spdk_pid1039106 00:43:50.851 Removing: /var/run/dpdk/spdk_pid1040050 00:43:50.851 Removing: /var/run/dpdk/spdk_pid1040068 00:43:50.851 Removing: /var/run/dpdk/spdk_pid1041041 00:43:50.851 Removing: /var/run/dpdk/spdk_pid1041239 00:43:50.851 Removing: /var/run/dpdk/spdk_pid1041423 00:43:50.851 Removing: /var/run/dpdk/spdk_pid1043134 00:43:50.851 Removing: /var/run/dpdk/spdk_pid1044414 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1044709 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1044996 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1045300 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1045590 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1045775 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1045965 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1046279 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1047101 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1050057 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1050161 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1050415 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1050418 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1050920 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1050936 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1051418 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1051537 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1051904 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1051914 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1052173 00:43:50.852 Removing: 
/var/run/dpdk/spdk_pid1052181 00:43:50.852 Removing: /var/run/dpdk/spdk_pid1052746 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1052997 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1053295 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1056913 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1061258 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1071289 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1071981 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1076315 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1076622 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1081274 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1086941 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1089542 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1099767 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1108467 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1110299 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1111227 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1127864 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1132230 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1214177 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1219574 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1225324 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1231120 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1231133 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1232033 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1232944 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1233819 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1234323 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1234332 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1234563 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1234786 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1234790 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1235701 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1236431 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1237343 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1237957 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1238032 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1238262 
00:43:51.111 Removing: /var/run/dpdk/spdk_pid1239276 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1240264 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1248472 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1277114 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1281510 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1283303 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1285459 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1285685 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1285724 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1285938 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1286442 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1288266 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1289036 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1289529 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1291629 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1292124 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1292631 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1296685 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1302064 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1302065 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1302066 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1305837 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1309586 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1314385 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1349802 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1353631 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1359617 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1360915 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1362319 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1363784 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1368641 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1372628 00:43:51.111 Removing: /var/run/dpdk/spdk_pid1379898 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1379995 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1384485 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1384712 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1384945 00:43:51.369 Removing: 
/var/run/dpdk/spdk_pid1385302 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1385411 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1386780 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1388403 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1390002 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1391604 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1393199 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1394880 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1400788 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1401296 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1402996 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1404009 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1410238 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1412939 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1418164 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1423300 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1431803 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1438814 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1438820 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1457676 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1458351 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1458822 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1459301 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1460032 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1460513 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1461071 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1461666 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1465708 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1465937 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1471997 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1472060 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1477339 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1481530 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1491042 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1491716 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1495745 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1496038 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1500346 
00:43:51.369 Removing: /var/run/dpdk/spdk_pid1506143 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1508731 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1518520 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1527105 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1528716 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1529632 00:43:51.369 Removing: /var/run/dpdk/spdk_pid1545308 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1549119 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1552266 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1559594 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1559599 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1564621 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1566465 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1568328 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1569390 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1571350 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1572621 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1581131 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1581598 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1582058 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1584311 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1584775 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1585240 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1588921 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1589059 00:43:51.370 Removing: /var/run/dpdk/spdk_pid1590693 00:43:51.627 Removing: /var/run/dpdk/spdk_pid1591099 00:43:51.627 Removing: /var/run/dpdk/spdk_pid1591277 00:43:51.627 Clean 00:43:51.627 10:18:20 -- common/autotest_common.sh@1451 -- # return 0 00:43:51.627 10:18:20 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:43:51.627 10:18:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:51.627 10:18:20 -- common/autotest_common.sh@10 -- # set +x 00:43:51.627 10:18:20 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:43:51.627 10:18:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:51.627 10:18:20 -- common/autotest_common.sh@10 -- # set 
+x 00:43:51.627 10:18:20 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:51.627 10:18:20 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:43:51.627 10:18:20 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:43:51.627 10:18:20 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:43:51.627 10:18:20 -- spdk/autotest.sh@394 -- # hostname 00:43:51.627 10:18:20 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:43:51.885 geninfo: WARNING: invalid characters removed from testname! 
00:44:13.795 10:18:40 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:15.170 10:18:43 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:17.183 10:18:45 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:19.082 10:18:47 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:20.979 10:18:49 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:22.895 10:18:51 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:24.793 10:18:53 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:44:24.793 10:18:53 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:44:24.793 10:18:53 -- common/autotest_common.sh@1681 -- $ lcov --version 00:44:24.793 10:18:53 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:44:24.793 10:18:53 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:44:24.793 10:18:53 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:44:24.793 10:18:53 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:44:24.793 10:18:53 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:44:24.793 10:18:53 -- scripts/common.sh@336 -- $ IFS=.-: 00:44:24.793 10:18:53 -- scripts/common.sh@336 -- $ read -ra ver1 00:44:24.793 10:18:53 -- scripts/common.sh@337 -- $ IFS=.-: 00:44:24.793 10:18:53 -- scripts/common.sh@337 -- $ read -ra ver2 00:44:24.793 10:18:53 -- scripts/common.sh@338 -- $ local 'op=<' 00:44:24.793 10:18:53 -- scripts/common.sh@340 -- $ ver1_l=2 00:44:24.793 10:18:53 -- scripts/common.sh@341 -- $ ver2_l=1 00:44:24.793 10:18:53 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:44:24.793 10:18:53 -- scripts/common.sh@344 -- $ case "$op" in 00:44:24.793 10:18:53 -- scripts/common.sh@345 -- $ : 1 00:44:24.793 10:18:53 -- scripts/common.sh@364 
-- $ (( v = 0 )) 00:44:24.793 10:18:53 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:24.793 10:18:53 -- scripts/common.sh@365 -- $ decimal 1 00:44:24.793 10:18:53 -- scripts/common.sh@353 -- $ local d=1 00:44:24.793 10:18:53 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:44:24.793 10:18:53 -- scripts/common.sh@355 -- $ echo 1 00:44:24.793 10:18:53 -- scripts/common.sh@365 -- $ ver1[v]=1 00:44:24.793 10:18:53 -- scripts/common.sh@366 -- $ decimal 2 00:44:24.793 10:18:53 -- scripts/common.sh@353 -- $ local d=2 00:44:24.793 10:18:53 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:44:24.793 10:18:53 -- scripts/common.sh@355 -- $ echo 2 00:44:24.793 10:18:53 -- scripts/common.sh@366 -- $ ver2[v]=2 00:44:24.793 10:18:53 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:44:24.793 10:18:53 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:44:24.793 10:18:53 -- scripts/common.sh@368 -- $ return 0 00:44:24.793 10:18:53 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:24.793 10:18:53 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:44:24.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:24.793 --rc genhtml_branch_coverage=1 00:44:24.793 --rc genhtml_function_coverage=1 00:44:24.793 --rc genhtml_legend=1 00:44:24.793 --rc geninfo_all_blocks=1 00:44:24.793 --rc geninfo_unexecuted_blocks=1 00:44:24.793 00:44:24.793 ' 00:44:24.793 10:18:53 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:44:24.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:24.793 --rc genhtml_branch_coverage=1 00:44:24.793 --rc genhtml_function_coverage=1 00:44:24.793 --rc genhtml_legend=1 00:44:24.793 --rc geninfo_all_blocks=1 00:44:24.793 --rc geninfo_unexecuted_blocks=1 00:44:24.793 00:44:24.793 ' 00:44:24.793 10:18:53 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:44:24.793 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:44:24.793 --rc genhtml_branch_coverage=1 00:44:24.793 --rc genhtml_function_coverage=1 00:44:24.793 --rc genhtml_legend=1 00:44:24.793 --rc geninfo_all_blocks=1 00:44:24.793 --rc geninfo_unexecuted_blocks=1 00:44:24.793 00:44:24.793 ' 00:44:24.793 10:18:53 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:44:24.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:24.793 --rc genhtml_branch_coverage=1 00:44:24.793 --rc genhtml_function_coverage=1 00:44:24.793 --rc genhtml_legend=1 00:44:24.793 --rc geninfo_all_blocks=1 00:44:24.793 --rc geninfo_unexecuted_blocks=1 00:44:24.793 00:44:24.793 ' 00:44:24.793 10:18:53 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:24.793 10:18:53 -- scripts/common.sh@15 -- $ shopt -s extglob 00:44:24.793 10:18:53 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:44:24.793 10:18:53 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:24.793 10:18:53 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:24.793 10:18:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:24.794 10:18:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:24.794 10:18:53 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:24.794 10:18:53 -- paths/export.sh@5 -- $ export PATH 00:44:24.794 10:18:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:24.794 10:18:53 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:44:24.794 10:18:53 -- common/autobuild_common.sh@479 -- $ date +%s 00:44:24.794 10:18:53 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733563133.XXXXXX 00:44:24.794 10:18:53 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733563133.QdXuTG 00:44:24.794 10:18:53 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:44:24.794 10:18:53 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:44:24.794 10:18:53 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:44:24.794 10:18:53 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:44:24.794 10:18:53 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:44:24.794 10:18:53 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:44:24.794 10:18:53 -- common/autobuild_common.sh@495 -- $ get_config_params 00:44:24.794 10:18:53 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:44:24.794 10:18:53 -- common/autotest_common.sh@10 -- $ set +x 00:44:24.794 10:18:53 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:44:24.794 10:18:53 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:44:24.794 10:18:53 -- pm/common@17 -- $ local monitor 00:44:24.794 10:18:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:24.794 10:18:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:24.794 10:18:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:24.794 10:18:53 -- pm/common@21 -- $ date +%s 00:44:24.794 10:18:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:24.794 10:18:53 -- pm/common@21 -- $ date +%s 00:44:24.794 10:18:53 -- pm/common@25 -- $ sleep 1 00:44:24.794 10:18:53 -- pm/common@21 -- $ date +%s 00:44:24.794 10:18:53 -- pm/common@21 -- $ date +%s 00:44:24.794 10:18:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1733563133 00:44:24.794 10:18:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1733563133 00:44:24.794 
10:18:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1733563133 00:44:24.794 10:18:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1733563133 00:44:24.794 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1733563133_collect-vmstat.pm.log 00:44:24.794 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1733563133_collect-cpu-load.pm.log 00:44:24.794 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1733563133_collect-cpu-temp.pm.log 00:44:24.794 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1733563133_collect-bmc-pm.bmc.pm.log 00:44:25.729 10:18:54 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:44:25.729 10:18:54 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:44:25.729 10:18:54 -- spdk/autopackage.sh@14 -- $ timing_finish 00:44:25.729 10:18:54 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:44:25.729 10:18:54 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:44:25.729 10:18:54 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:25.729 10:18:54 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:44:25.729 10:18:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:44:25.729 10:18:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:44:25.729 10:18:54 -- 
pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:25.729 10:18:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:44:25.729 10:18:54 -- pm/common@44 -- $ pid=1602692 00:44:25.729 10:18:54 -- pm/common@50 -- $ kill -TERM 1602692 00:44:25.729 10:18:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:25.729 10:18:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:44:25.729 10:18:54 -- pm/common@44 -- $ pid=1602694 00:44:25.729 10:18:54 -- pm/common@50 -- $ kill -TERM 1602694 00:44:25.729 10:18:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:25.729 10:18:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:44:25.729 10:18:54 -- pm/common@44 -- $ pid=1602696 00:44:25.729 10:18:54 -- pm/common@50 -- $ kill -TERM 1602696 00:44:25.729 10:18:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:25.729 10:18:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:44:25.729 10:18:54 -- pm/common@44 -- $ pid=1602718 00:44:25.729 10:18:54 -- pm/common@50 -- $ sudo -E kill -TERM 1602718 00:44:25.729 + [[ -n 943939 ]] 00:44:25.729 + sudo kill 943939 00:44:25.739 [Pipeline] } 00:44:25.757 [Pipeline] // stage 00:44:25.762 [Pipeline] } 00:44:25.778 [Pipeline] // timeout 00:44:25.784 [Pipeline] } 00:44:25.799 [Pipeline] // catchError 00:44:25.805 [Pipeline] } 00:44:25.820 [Pipeline] // wrap 00:44:25.826 [Pipeline] } 00:44:25.838 [Pipeline] // catchError 00:44:25.849 [Pipeline] stage 00:44:25.851 [Pipeline] { (Epilogue) 00:44:25.863 [Pipeline] catchError 00:44:25.865 [Pipeline] { 00:44:25.877 [Pipeline] echo 00:44:25.879 Cleanup processes 00:44:25.886 [Pipeline] sh 00:44:26.165 + sudo pgrep -af 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:26.165 1602846 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:44:26.165 1603191 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:26.180 [Pipeline] sh 00:44:26.461 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:26.461 ++ grep -v 'sudo pgrep' 00:44:26.461 ++ awk '{print $1}' 00:44:26.461 + sudo kill -9 1602846 00:44:26.474 [Pipeline] sh 00:44:26.755 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:38.945 [Pipeline] sh 00:44:39.223 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:39.223 Artifacts sizes are good 00:44:39.238 [Pipeline] archiveArtifacts 00:44:39.245 Archiving artifacts 00:44:39.403 [Pipeline] sh 00:44:39.681 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:44:39.694 [Pipeline] cleanWs 00:44:39.702 [WS-CLEANUP] Deleting project workspace... 00:44:39.703 [WS-CLEANUP] Deferred wipeout is used... 00:44:39.708 [WS-CLEANUP] done 00:44:39.709 [Pipeline] } 00:44:39.728 [Pipeline] // catchError 00:44:39.741 [Pipeline] sh 00:44:40.023 + logger -p user.info -t JENKINS-CI 00:44:40.031 [Pipeline] } 00:44:40.043 [Pipeline] // stage 00:44:40.048 [Pipeline] } 00:44:40.058 [Pipeline] // node 00:44:40.062 [Pipeline] End of Pipeline 00:44:40.173 Finished: SUCCESS